The exact group of people at whom the learning program is aimed.
One of three behaviors exhibited by individuals in a group. Task-oriented behaviors contribute to the accomplishment of the task component, e.g., initiating, seeking, giving, elaborating, summarizing, coordinating and testing. See also Maintenance Behavior and Self-Oriented Behavior.
Johnson Graduate School of Management, Cornell University
An individual skill or knowledge item included in the learning program.
A pre-formatted form that simplifies the entry of data and speeds up development time, e.g., templates for lesson plans, storyboards, various question types.
An evaluative device or procedure in which a sample of the test-takers’ skill and knowledge in a specified domain is obtained and scored using a standardized process. Types of tests include:
Measures the degree to which the following three components are aligned: the learning objectives, the learning opportunities to achieve these objectives, and the method used to assess attainment of the learning objectives.
Test Blueprint or Test Specifications
Defines in detail the structure for the test forms: content areas, cognitive levels, format of items and responses, number or proportion of questions by content area, etc.
Includes purpose of the test, intended audience and other background information.
The principle that every test-taker should be assessed in an equitable way. Fair tests should be free of bias based on characteristics such as race, religion, gender or age.
The degree to which the test resembles the actual required performance.
Test for Understanding (TFU)
Questions asked during Presentation to ensure learners understand before they move to the planned Application. Sometimes called Knowledge Checks.
Measures the ability of a test to produce consistent scores or classifications. It is one of the most important criteria of the quality of a test. There are five types of reliability:
• Decision consistency
• Alternate forms reliability
• Test-retest reliability
• Internal consistency
• Interrater reliability
Results can only be considered reliable if the sample size used is large enough.
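Internal consistency, one of the reliability types above, is commonly estimated with Cronbach's alpha. A minimal sketch of that calculation, using hypothetical item scores for four test-takers (the data are illustrative only):

```python
# Cronbach's alpha: an internal-consistency estimate computed from item-level
# scores. Hypothetical data: 4 test-takers answering 3 items (1 = correct).
scores = [
    [1, 1, 1],
    [1, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]

def pvar(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

k = len(scores[0])                                            # number of items
item_vars = [pvar([row[i] for row in scores]) for i in range(k)]
total_var = pvar([sum(row) for row in scores])                # variance of total scores

# alpha = (k / (k - 1)) * (1 - sum of item variances / total-score variance)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

Values closer to 1 indicate more internally consistent items; many practitioners treat roughly 0.7 and above as acceptable, though cut-offs vary by context.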
Estimate of the extent to which a test can provide consistent, stable test scores for the same group of test-takers across time.
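In practice, test-retest reliability is typically estimated as the Pearson correlation between scores from two administrations of the same test to the same group. A minimal sketch with hypothetical scores (the numbers are illustrative only):

```python
import math

# Scores for the same five test-takers at two points in time (hypothetical).
time1 = [70, 85, 60, 90, 75]
time2 = [72, 83, 65, 88, 74]

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(time1, time2)  # near 1.0 means very stable scores across time
```

A coefficient near 1.0 suggests the test yields stable scores across time; values well below that suggest the scores are sensitive to when the test is taken.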
The procedures designed to prevent cheating. Access to the content must be carefully restricted during the development, scoring and analysis of the test. A secure test must not be published in any form or in any venue.
The extent to which test-takers know beforehand: the true purpose of the test, what will be tested, how it will be tested, the criteria for success and how individual questions will be scored.
The extent to which a test actually measures what it is supposed to measure. It is one of the most important criteria of the quality of a test and depends on reliability: a test cannot be valid if it is not reliable. There are three forms of validity:
• Content validity
• Concurrent validity
• Predictive validity
Since validity requires reliability, and proving reliability requires data from an adequate sample, validity can likewise only be established by collecting data from a large enough sample.
A question that asks about the content itself. For example:
• The definition or illustration of…
• The reason behind…
• The relationship between…
• The differences/similarities between…
Tools and References
One of FKA’s performance parameters. These are devices, implements or written material used during performance of an ability or component. Job Aids are listed separately.
An input device layered on top of the visual display (screen) that lets the user make choices by touching icons or graphical buttons on the screen. It works by sensing the position and movements of the user’s finger(s) and passing that data to the running program.
Training Needs Analysis
An investigation to define, plan and cost-justify the best learning program for a given situation.
Any planned activity following a formal learning program that reinforces the skills and knowledge acquired during that program. A transfer activity reinforces on the job what was learned in the program, while a bridging activity moves the learner from the end-of-learning performance level to the required job performance level.
A set of activities developed to ensure all skills and knowledge learned during a formal learning program are transferred back to the job. It should include a plan to reduce or eliminate barriers to the transfer, and the roles and responsibilities of the learners, the learners’ managers and the learning organization. A transfer strategy helps learners transfer what was learned in the program to the job, while a bridging strategy helps learners move from the end-of-learning performance level to the required job performance level.
See Test Transparency.
See Binary Choice Item.
(1) Instruction given to learners individually or in small groups by an instructor or ‘tutor’.
(2) A computer-based, interactive method of learning. The computer presents new concepts and skills through interactive text, illustrations, descriptions, questions and problems. Information is sequenced to build on previously learned concepts, and feedback and guidance are often provided. One of the most common examples is a tutorial on the use of a computer system or software application.
One type of coaching activity. In the workplace environment, tutoring is the process of teaching the coachee how to perform specific tasks. When done by a peer it is called peer-tutoring.