C


Capacity

(1) Mental, emotional and physical power or capability; the inherent capability of an individual or system to learn or perform specified actions.
(2) One of eight performance factors; a deficiency in capacity can result in below standard performance.

Case Study

A detailed account of an event or a series of events presented for analysis or action by the learners. There are three major types of cases:
• In-basket
• Incident Process
• Critical Instance

Certification

Program and process where a learner completes prescribed learning program(s) and passes an assessment with a minimum acceptable score. To increase validity and assure authentication, the certification process should be proctored by an independent agent.

Characteristic

In FKA’s Instructional Systems Design Methodology, a characteristic is:
(1) an attribute of a concept; the concept is defined by its set of characteristics
(2) one of three factors used to calculate the relative priority of abilities and components: criticality, difficulty and frequency

Chat

Text-based real-time communication on the Internet. Can be used during online presentations to let participants ask and answer questions and communicate with the host and each other.

Circumstances

One of FKA’s performance parameters. It defines the surroundings, conditions, or situations in which an ability is performed on the job. It is identified during the performance analysis.

Clarifying

During active listening, if the listener does not understand what the speaker said, the listener can ask for more information to clear up his/her understanding. See also Confirming.

Classical Test Theory

The traditional approach to assessment which focuses on developing quality test forms. It can involve item analysis, reliability analysis and validity analysis as well as the criteria used to assemble test forms.
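Item analysis under classical test theory typically reduces to simple statistics over a matrix of scored responses. The following minimal Python sketch (the data and function names are hypothetical; real analyses use dedicated psychometric tools) computes two common item statistics, difficulty and discrimination:

    # Minimal item-analysis sketch. Rows = test-takers, columns = items;
    # 1 = correct response, 0 = incorrect.
    responses = [
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 1, 0, 0],
        [1, 1, 1, 1],
    ]

    def item_difficulty(responses, item):
        """Proportion of test-takers answering the item correctly."""
        return sum(row[item] for row in responses) / len(responses)

    def item_discrimination(responses, item):
        """Correlation between item score and total test score."""
        totals = [sum(row) for row in responses]
        scores = [row[item] for row in responses]
        n = len(responses)
        mean_t, mean_s = sum(totals) / n, sum(scores) / n
        cov = sum((t - mean_t) * (s - mean_s)
                  for t, s in zip(totals, scores)) / n
        var_t = sum((t - mean_t) ** 2 for t in totals) / n
        var_s = sum((s - mean_s) ** 2 for s in scores) / n
        return cov / ((var_t * var_s) ** 0.5)

    for i in range(4):
        print(f"Item {i}: difficulty={item_difficulty(responses, i):.2f}, "
              f"discrimination={item_discrimination(responses, i):.2f}")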

Classification

The process of categorizing test-takers into two or more discrete groups, such as pass/fail or master/non-master.

Clear Example

A clear example is one of the three kinds of examples created during Concept Analysis. A clear example matches all of the characteristics of the concept.

Client

In a consulting relationship, the client can be one person or a group of people. The client has the Money to implement the intervention, the Authority to give approvals, and the Desire to see it through to a successful conclusion (MAD).

Closed Question

Has a limited number of logical answers, e.g., “Which instructional strategy would you recommend in this situation?” In a written test, closed questions can be implemented as True/False, multiple-choice, matching, fill-in-the-blank or short answer.

Coaching

A form of on-the-job performance support. The process of providing feedback, insight and guidance to individuals to help them attain their full potential in their business or personal life. Coaching can include counseling, mentoring and tutoring activities.

Cognitive Domain

The area of brain function that handles mental processes. The revised Bloom’s Taxonomy (2001) divides this domain into six levels, which from lowest to highest are: Remember, Understand, Apply, Analyze, Evaluate and Create. In general, different action verbs are used for the objectives for each of the different cognitive levels.

Cognitive Neuroscience

The study of higher cognitive functions that exist in humans, and their underlying neural bases. Cognitive neuroscience draws from linguistics, neuroscience, psychology and cognitive science. Cognitive neuroscientists explore the nature of cognition (the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses) from a neural point of view.

Collaborative Role

One of three roles performance consultants can play. The collaborative consultant works jointly with the client to resolve a problem or address a business opportunity. The consultant’s specialized technical knowledge is coupled with the client’s knowledge of the organization in a joint problem-solving relationship. See also Pair-of-Hands Role and Expert Role.

Committee

A type of discussion. A small group is selected to perform a task that cannot be handled efficiently by a large group. The committee then reports back to the large group for direction and evaluation.

Communities of Practice

Groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly. A community of practice has an identity defined by a shared domain of interest. In pursuing their interest in their domain, members engage in joint activities and discussions, help each other, and share information. Members of a community of practice are practitioners.

Competence, Levels of

In 1982 William Howell described the four levels of competence:
• Unconscious Incompetence – “I don’t know that I don’t know how to do it.”
• Conscious Incompetence – “I know that I don’t know how to do it.”
• Conscious Competence – “I can do it, but I have to think about it.”
• Unconscious Competence – “I can do it without even thinking about it.”

Competency

In FKA’s Instructional Systems Design Methodology, a competency is a cluster of related skills, knowledge and attitudes required by a number of job categories for a very broad population, such as computer skills or problem-solving skills. It applies to performance on the job and can be measured against well-accepted standards.

Competency Analysis

In FKA’s Instructional Systems Design Methodology, competency analysis examines various capabilities exhibited by individuals in different jobs and organizational levels, e.g., effective communication skills, or critical thinking skills.

Completion Item

A type of test question which requires the test-taker to complete a statement by filling in the missing words or phrases in the blank spaces. Also called fill-in-the-blank. Tests recall of knowledge.

Component

In FKA’s Instructional Systems Design Methodology, a component is the first level breakdown of an ability.

Concept

A concrete or abstract idea that cannot be easily defined by a synonym; a group or class of objects formed by combining all of their aspects or characteristics, e.g., closed question. Concepts are taught through a series of three types of examples: clear, divergent and near non-example.

Concept Analysis

In FKA’s Instructional Systems Design Methodology, concept analysis identifies the characteristics of a concept and provides examples to clarify the definition. Concept analysis is a vehicle for confirming understanding of the concept with a subject matter expert and planning how it may be effectively communicated to others through the use of examples.

Concurrent Validity

Measures the degree to which the scores on one test are related to the scores on another, already established test, administered at the same time, or are related to some other valid criterion available at the same time.
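Concurrent validity is usually reported as a correlation between the two sets of scores. A minimal Python sketch (the scores are hypothetical; statistics.correlation requires Python 3.10 or later):

    from statistics import correlation

    # Hypothetical scores for the same test-takers on a new test and on an
    # established reference test administered at the same time.
    new_test  = [72, 85, 90, 64, 78, 88]
    reference = [70, 82, 94, 60, 75, 91]

    # A high positive correlation suggests the new test ranks test-takers
    # much like the established one.
    print(f"Concurrent validity estimate: "
          f"r = {correlation(new_test, reference):.2f}")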

Conditions

One of eight performance factors; the characteristics of the environment within which job performance takes place. Unfavorable conditions can result in poor performance.

Confidence Interval

A numeric range, based on a sample, within which the population scores/statistics are expected to fall a specified proportion of the time (the confidence level, typically 95%). Confidence intervals are expressed as “plus or minus” a value, usually between 3% and 10%. Wider intervals indicate lower precision; narrower intervals indicate greater precision.

Confidence Level

The degree of certainty that a statistical prediction is accurate. Generally, a confidence level of 95% to 99% is considered acceptable; most researchers use 95%. A 95% confidence level means you can be 95% certain that the results from a sample, plus or minus a confidence interval, will hold true for the whole population that the sample represents.
For example, if 82% of a sample group passes a test (and your sample size was adequate), you can predict with 95% confidence that the population will have the same results, plus or minus the confidence interval: “The population should score 82% ± 5%, 19 times out of 20 (95% of the time).”
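The figures in this example can be reproduced with the normal approximation for a proportion. A minimal Python sketch (the sample size is hypothetical; n ≈ 230 happens to yield a ±5% margin):

    import math

    p = 0.82   # sample pass rate from the example above
    n = 230    # hypothetical sample size
    z = 1.96   # z-value corresponding to a 95% confidence level

    # Normal-approximation confidence interval for a proportion:
    # p ± z * sqrt(p * (1 - p) / n)
    margin = z * math.sqrt(p * (1 - p) / n)
    print(f"{p:.0%} ± {margin:.1%}")   # prints: 82% ± 5.0%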

Confirming

During active listening, the listener repeats what he/she understood the speaker was saying. The speaker can then validate the listener’s understanding or add more information to clarify. See also Clarifying.

Confusing TFU (Test for Understanding)

A weak learning interaction in which the TFU is a poorly constructed fill-in-the-blank item with too many blanks. Even if the learner knows the correct responses, it is not clear which answers fit into which spaces. There is a high probability that the learner will give at least one incorrect response even if he/she understood the content.

Constraints

Restrictions affecting the project, design, development, delivery, and job environments. Constraints should be identified as early as possible in Needs Identification during the Context Analysis. Known limits on time, budget, equipment, human resources, and facilities are a few examples.

Content Analysis

In FKA’s Instructional Systems Design Methodology, content analysis examines the body of information needed to perform a job, e.g., new product information or health and safety regulations.

Content Validity

Measures the degree to which a test measures the intended content area and samples the total of that area. It is determined by subject matter experts.

Context Analysis

In FKA’s Instructional Systems Design Methodology, context analysis is the process of identifying factors that impact the design, development or delivery of the proposed learning program. The intent of a context analysis is to provide information to the development team that will allow them to make decisions that are effective given the project, design and development, delivery and job parameters and constraints.

Context Parameters

Sets of project, design and development, delivery and job considerations that describe the circumstance or environment in which the learning solution must work.

Corporate Training Plan

See Preliminary Learning Plan.

Cost

In cost-benefit analysis, the cost is the total dollar value of the intervention, including analysis, design, development, implementation, validation and evaluation.

Cost-Benefit Analysis

The comparison of the total cost of designing and delivering the learning program with the anticipated benefit of the resulting improved performance.
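The underlying arithmetic is simple. A minimal Python sketch with hypothetical figures:

    # Hypothetical cost-benefit figures for a learning program.
    costs = {
        "analysis": 15_000,
        "design_and_development": 60_000,
        "implementation": 40_000,
        "validation_and_evaluation": 10_000,
    }
    expected_benefit = 180_000  # estimated value of improved performance

    total_cost = sum(costs.values())
    print(f"Total cost:         ${total_cost:,}")
    print(f"Net benefit:        ${expected_benefit - total_cost:,}")
    print(f"Benefit-cost ratio: {expected_benefit / total_cost:.2f}")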

Counseling

In a workplace environment, this aspect of coaching should focus on helping the coachee identify and solve his or her own personal or professional problems.

Course

In FKA’s Instructional Systems Design Methodology, one course corresponds to one responsibility in the Model of Performance (MoP).

Course Map

Usually a visual representation showing the modules (and possibly the lessons) within the course and the recommended completion order for these components.

Courseware

(1) The software that provides learning content and instruction via computer program.
(2) Any instructional materials used by instructors/facilitators and learners during the learning program.

Credibility

The quality of being believable or trustworthy. Instructors/facilitators/coaches can achieve professional credibility if the learners believe that they have good interpersonal and communication skills, can effectively manage the learning/coaching situation and are sufficiently knowledgeable on the subject.

Crediting Feedback

Gives recognition to a person for the purpose of maintaining or enhancing his/her good performance. Effective credits are more than a pat on the back or a vague statement such as, “Good job.” Effective crediting feedback provides information that helps the person maintain adequate or superior performance and motivates him/her to meet or exceed standards.

Criterion

Standard by which something is measured.

Criterion Test

In FKA’s Instructional Systems Design Methodology, the criterion test is the test at the end of a module. It should be designed to assess whether or not the module objective has been achieved. Also called a module test. See also Summative Evaluation (or Assessment).

Criterion-Referenced Tests (CRTs)

Assessment that involves measurements to divide test-takers into two or more distinct groups by comparing their scores to an established standard rather than to the scores of other test-takers. Certification exams are usually CRTs, not norm-referenced tests (NRTs).

Critical Instance

A type of case study that involves a short, narrative description of an event or situation. The learner is required to explain what is being described and to provide recommended actions to be taken.

Criticality

In FKA’s Instructional Systems Design Methodology, criticality is one of three characteristics used to rate the relative priority of abilities and components. The more critical an ability is to the job, the higher priority it will be given to be included in the learning program. The other two characteristics used to calculate the priority value are difficulty and frequency. See also Priority Value.
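FKA’s actual scoring scheme is not described here, so the following Python sketch is purely illustrative: it assumes each ability is rated 1–3 on criticality, difficulty and frequency, and that the priority value is the product of the three ratings.

    # Hypothetical ratings: (criticality, difficulty, frequency), each 1-3.
    abilities = {
        "Handle customer escalations": (3, 2, 3),
        "File weekly status report":   (1, 1, 3),
        "Configure backup schedule":   (3, 3, 1),
    }

    def priority_value(criticality, difficulty, frequency):
        # Assumed formula: simple product of the three ratings.
        return criticality * difficulty * frequency

    # Rank abilities from highest to lowest priority.
    for name, ratings in sorted(abilities.items(),
                                key=lambda kv: priority_value(*kv[1]),
                                reverse=True):
        print(f"{priority_value(*ratings):>2}  {name}")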

Curriculum

A series of related courses. In FKA’s Instructional Systems Design Methodology, the curriculum is the subset of the Model of Learning (MoL) to be included in the formal learning program after exclusions have been made. Curriculum + bridging strategy(ies) = MoL

Cut Score

The passing score that divides test-takers into two categories: those at or above the score and those below. It can be used to classify test-takers into categories such as pass/fail, qualified/unqualified, master/non-master or selected/rejected.
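Classification against a cut score is a simple threshold test. A minimal Python sketch (the cut score and labels are hypothetical):

    CUT_SCORE = 80  # hypothetical passing score

    def classify(score, cut_score=CUT_SCORE):
        """Test-takers at or above the cut score pass; all others fail."""
        return "pass" if score >= cut_score else "fail"

    for score in (95, 80, 79, 62):
        print(f"{score:>3}: {classify(score)}")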
