Glossary of Research


description

Very important for new researchers. This glossary helps you learn the necessary terminology of research and develop a clear understanding of key terms.

Transcript of Glossary of Research

Page 1: Glossary of Research

GLOSSARY OF RESEARCH

Research Methodology

Admin

Glossary

This Glossary provides definitions for key terms used in the previous chapters. Most definitions include other key terms. Key terms used in the definition of other key terms are in bold type. This lets you go to any term you encounter and find its meaning.

In addition, other glossaries of social research and statistical terms are available on Web sites. The names and Web addresses of some of these glossaries are listed under "Aids - Internet Resources" at the end of Chapter 17.

A

Abstract (abstraction): a mental image of something that people experience and agree to describe in a certain way; concepts, for example, are abstractions derived from observations and defined in scientific terms; abstract is the opposite of concrete, which refers to the specific things we experience and can observe

(An) abstract: a short summary of a publication, usually about 250 words

Alternative hypothesis: the original hypothesis formulated at the beginning of a research project

Analysis: the process of summarizing and organizing data to establish the results of an investigation

Analysis of variance: a statistical test used to determine if differences among three or more means are statistically significant
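
The logic of the test can be sketched in pure Python (the function name and sample data below are my own illustration, not from the text): the F statistic compares variation between group means with variation within groups.

```python
import statistics

def one_way_anova_f(*groups):
    """F statistic for a one-way analysis of variance.

    Returns (F, df_between, df_within).
    """
    k = len(groups)                            # number of groups
    n = sum(len(g) for g in groups)            # total observations
    grand_mean = sum(sum(g) for g in groups) / n

    # Between-groups sum of squares: group means around the grand mean
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    # Within-groups sum of squares: scores around their own group mean
    ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g)
                    for g in groups)

    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

f, dfb, dfw = one_way_anova_f([1, 2, 3], [2, 3, 4], [8, 9, 10])
```

A large F relative to its degrees of freedom (here 2 and 6) suggests the differences among the three means are statistically significant.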

Anonymity: the assurance given to respondents that no one, not even the investigator, will be able to identify the respondent or any data supplied by or about the respondent

Area sample: see cluster sample

Assessment research: research undertaken to see if a program is achieving the objectives set for it; also referred to as evaluation research

Association (or associated): refers to the extent to which one variable is related to another variable; a measure of how changes in one variable influence changes in another variable

Attribute: the elements that make up a variable; may be expressed either in words (male or female) or in numbers

Available data: data that already exist in the form of responses to previous surveys, as mass media material, or as other written, audio, video, or cultural artifacts

Average: a loose term used in everyday language to describe one form of the central tendency of a distribution; statisticians use mean in place of average; two other averages or "typical" scores for a distribution are the median and the mode
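
The three "typical" scores can be computed with Python's standard library (a sketch; the scores are invented for illustration):

```python
import statistics

scores = [70, 75, 75, 80, 85, 90, 95]

mean   = statistics.mean(scores)    # arithmetic average: sum / count
median = statistics.median(scores)  # middle score of the ordered list
mode   = statistics.mode(scores)    # most frequently occurring score
```

For these scores the median is 80 and the mode is 75; the mean (about 81.4) is pulled slightly upward by the higher scores.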

B

Back translation: the translation of a document that was translated into a new language back into the original language

Bar chart: a graphic way of presenting data in which bars representing the attributes of a variable are arranged along the X axis of a graph and the height of the bars, as measured on the Y axis, shows the frequency for each attribute; also known as a histogram

Bias: any tendency to see events in a certain way that causes distortions in the collection or analysis of data or in drawing conclusions from findings

Bimodal distribution: a distribution with two modes

Bivariate analysis: the simultaneous analysis of two variables; bivariate analysis is generally done to find the extent of association between two variables

Bogardus social distance scale: a measurement technique for finding how closely respondents say they are willing to associate with members of some designated group; social distance scales are used to measure attitudes toward some group of persons

Browser: a software program that allows a computer to access and display Web sites on the Internet

Browsing: casual examination of books or other materials in search of relevant materials; one can also browse among Web sites, using links on sites to move from one site to another; this form of browsing is called surfing

C

Call back: the act of making a second or third visit to a respondent to obtain an interview

Case study: a detailed investigation of a person, organization, village or other entity for the purpose of understanding the entity in all its complexity as fully as possible

Casual observation: observation of behavior in which actions are recorded in narrative form; stands in contrast to structured observation where observations are noted in terms of pre-defined categories

Categorical variable: a variable whose attributes form some kind of a classification; the categories used form the elements of the classification; male and female, for example, would be categories of the classification of persons based on gender; categorical variables are also referred to as qualitative variables

Causal hypothesis testing: testing a hypothesis under carefully controlled conditions, as in a true experiment, to exclude the influence of any variable other than the independent or experimental variable upon the dependent variable; under these conditions, changes in the dependent variable are assumed to be caused by the independent or experimental variable

Cause and effect (or causal relationship): refers to a relationship where one variable is thought to be solely or substantially responsible for changes in another variable; see the definition of causal hypothesis testing

CD-ROM: stands for "compact disk read only memory," a form of electronic storage for music, data files and other information; is "read" or played with the help of a computer

Cell: a part of a table identified by the intersection of a column and a row of the table

Census: collection of data from all the members of some population; also called enumeration

Central tendency: the tendency of the scores in a distribution to cluster around a central or typical value; the mean, median, and mode are measures of central tendency

Chart format: used when the same question is repeated with the same response categories; example, when asking for the ages of all members of a household

Chain sample: see network sampling

Chance selection: see random selection

Chi square: a statistical test for determining whether two variables are independent of one another; chi square is based on comparing differences between observed and expected frequencies for various cells in a table
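
A minimal sketch of the computation (pure Python; the function name and table are my own): expected cell frequencies come from the row and column totals, and chi square sums the squared deviations of observed from expected frequencies.

```python
def chi_square(table):
    """Chi-square statistic for a cross classification table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected frequency if the two variables were independent
            expected = row_totals[i] * col_totals[j] / total
            chi2 += (observed - expected) ** 2 / expected
    return chi2

chi2 = chi_square([[10, 20], [20, 10]])  # every expected frequency is 15
```

The resulting value is then compared against a chi-square distribution with the appropriate degrees of freedom to judge statistical significance.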

Cronbach's alpha: used in item analysis to select items that are highly associated with the other items in a composite measure; items whose scores correlate moderately with other items are assumed to be measuring the same thing and, therefore, the scores can be safely combined to provide a composite measure
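
The coefficient itself can be sketched in a few lines (pure Python; the function name and data are illustrative): alpha rises toward 1.0 as the items vary together.

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a list of items, each a list of scores
    in the same respondent order."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    item_variance = sum(statistics.pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_variance / statistics.pvariance(totals))

# Three items that vary together perfectly across three respondents
alpha = cronbach_alpha([[1, 2, 3], [1, 2, 3], [1, 2, 3]])
```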

Class interval width: closely related to class limits; any whole number (22, 51, 175, etc.) is really the midpoint of a range that extends 0.5 below and 0.5 above the number; thus the interval 20-29 has true limits of 19.5 to 29.5, a class width of 10, and a midpoint of 24.5

Class limits: the range of numbers that are created when continuous data are combined to form broader categories or intervals; for example, exact ages can be combined into intervals, such as 20-29, 30-39, etc.; the ten year categories are the class limits for the age intervals

Classical experiment: a technique for testing hypotheses under carefully controlled conditions, where the experimental or independent variable is administered to the experimental group but not to an equivalent control group and measurements of the dependent variable are compared between the two groups following the experiment; also called the true experiment

Closed item: a question or item with a fixed set of responses; respondents are asked to select the response that most closely matches their views

Cluster sampling: a probability sampling design based on random selection of successive clusters or units with a simple random sample used in the final cluster to form the final sample; also referred to as an area or multistage sample
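
The successive stages can be sketched with the standard library's random module (the village names and membership lists below are invented for illustration):

```python
import random

random.seed(42)  # fixed seed so the example is reproducible

# Hypothetical sampling frame: ten villages, thirty members each
clusters = {f"village_{i}": list(range(i * 100, i * 100 + 30))
            for i in range(10)}

# Stage 1: randomly select three clusters
chosen = random.sample(sorted(clusters), k=3)

# Stage 2: simple random sample of five members within each chosen cluster
sample = [member
          for name in chosen
          for member in random.sample(clusters[name], k=5)]
```

Only the selected clusters need to be visited, which is why cluster sampling is economical for large, geographically dispersed populations.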

Codebook: a record book used to provide information about variables, their attributes, and their locations in a data file; a codebook is used to plan analyses of variables and to interpret the results of analyses

Code transfer sheet: a sheet of paper with columns for recording the attributes of variables and with rows for each respondent or case; the code for each response or observation is placed at the intersection of the column for the attribute and the row for a particular respondent

Coding: the process of assigning numbers to represent the attributes of indicators; coding is a necessary step before data can be entered into a computer data file because computers can only "read" numbers

Coefficient of correlation: a statistical measure of association between two quantitative variables; a coefficient of correlation can vary between -1.00 and +1.00

Coefficient of determination: the squared value of the coefficient of correlation; it indicates the percentage of the variation in the dependent variable accounted for by the effect of the independent variable
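
Both coefficients can be sketched in pure Python (the function name and data are my own); squaring r gives the coefficient of determination.

```python
import math

def pearson_r(x, y):
    """Pearson coefficient of correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # cross products
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])  # a perfect positive relationship
r_squared = r ** 2                          # coefficient of determination
```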

Coefficient of reproducibility: the measure of the extent to which responses to a set of items form a Guttman scale; a coefficient of .90 or higher is generally accepted

Comparison analysis: a research design based on two or more independent samples, used to estimate how much difference there is among the samples in terms of the variables being measured

Composite measure (or score): scores or other measures based on two or more indicators; examples are scales and indexes, each of which consists of at least two items

Computer analysis: analysis of data using a statistical analysis software package stored in a computer

Concept: an abstract term we use to describe things that are real to us but that we cannot experience directly; mental images we share and use to describe the things we talk about

Conceptualization: the process of defining concepts central to an investigation; also includes specifying and defining dimensions of a concept for which measurements will be developed

Conclusion: the most general statement derived from the results of an investigation; the investigator draws conclusions from the analysis of data collected for the investigation

Concrete: refers to specific things we can experience directly; the specific, identifiable chair you are sitting on is considered concrete; in contrast, the idea of a chair is abstract

Concurrent validity: a method for estimating the validity of a measuring instrument, such as a scale, based on showing that scores on the instrument differentiate between persons known to differ in the variable being measured; example, in developing a scale to measure attitudes toward some group, the scores of persons known to hold strong positive and negative views toward that group would be compared; if the mean scores of the two groups were substantially different, the scale would be assumed to have demonstrated concurrent validity

Confidence interval: the range of values that contain the population parameter at a specified level of confidence; if a mean is estimated to lie between 5.05 and 8.15, the confidence interval is 5.05 to 8.15
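
As a sketch (pure Python; the function and data are illustrative, and the normal approximation is assumed), a 95 percent interval is the sample mean plus or minus 1.96 standard errors:

```python
import math
import statistics

def confidence_interval(sample, z=1.96):
    """Approximate confidence interval for a population mean
    (95 percent when z = 1.96; assumes a reasonably large sample)."""
    mean = statistics.mean(sample)
    standard_error = statistics.stdev(sample) / math.sqrt(len(sample))
    return mean - z * standard_error, mean + z * standard_error

low, high = confidence_interval([5, 6, 7, 8, 9])  # interval centered on 7
```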

Confidence level: see level of confidence

Confidentiality: the assurance given a respondent that, even though the investigator can link the respondent to his or her responses, the investigator will protect the respondent's identity

Construct: another term used for concept; a construct is a definition of a variable we intend to investigate; the term is used because, as social scientists, we construct a definition of a variable for the purposes of measurement

Contamination: occurs when members of a control group are accidentally or otherwise exposed to the experimental variable

Content analysis: a method for analyzing the content of written or verbal material; most often used in the analysis of mass media materials; based on development of a set of categories for coding the content of the material

Content validity: a form of validity based on how well the content of an indicator reflects the concept it is intended to measure

Contingency question: a question or item used to select respondents for further questions, depending on how they answer a preceding question; for example, before asking persons which political party they belong to, they could be asked if they now belong to a party; only those who answered "yes" would then be asked for the name of the party; also called a filter question

Contingency table: see cross classification table

Continuous variable: a variable whose attributes can assume increasingly smaller or larger values; examples are age or income, each of which can be measured in smaller and smaller amounts

Control group: the group in an experiment that is not exposed to the experimentalor independent variable but is selected to match the experimental group, which is exposed to the experimental or independent variable, in all other ways

Control variable: a variable that is held constant to remove its influence on other variables

Controlled comparison: a multivariate analysis in which a control variable is introduced to see if it causes changes in a relationship between other variables

Controlled setting: any situation created by an investigator for the purpose of hypothesis testing in which selected variables are controlled to minimize their influence on the outcome of the research

Convenience sample: a nonprobability form of sampling based on collecting data from whoever is available or encountered; also called a haphazard sample

Copy: the process of making a copy of material from a data file or Web site by using the copy function of a computer program

Correlation (coefficient of): a statistical measure of the empirical association between two indicators; also referred to as the coefficient of correlation; values for correlation coefficients vary between -1.00 and +1.00

Criterion validity: the extent to which an indicator for a concept is associated with an external criterion; for example, the validity of a test given in secondary school for predicting success in the university is shown by its ability to predict grade point averages at the end of the freshman year at the university; also referred to as predictive validity

Cross classification: analysis based on showing the relationship between two variables in categorical form; done in the form of bivariate or multivariate tables

Cross classification table (cross-tabulation table): a table showing the relationship between two variables; the data for one variable are displayed in the columns and data for the other variable in the rows of the table; also referred to as a contingency table
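
Building such a table amounts to counting cases at each row-column intersection; a sketch in pure Python (the variables and categories are invented for illustration):

```python
from collections import Counter

# One (gender, response) pair per respondent -- hypothetical data
data = [("male", "yes"), ("male", "no"), ("female", "yes"),
        ("female", "yes"), ("male", "yes"), ("female", "no")]

cells = Counter(data)  # frequency for each cell of the table
rows = sorted({gender for gender, _ in data})
cols = sorted({response for _, response in data})
table = [[cells[(gender, response)] for response in cols] for gender in rows]
```

Here `table` works out to [[1, 2], [1, 2]]: one "no" and two "yes" responses for each gender.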

Cross products: the products of the scores of two variables, required for the calculation of the coefficient of correlation and other statistics

Cross-sectional design: a design used for surveys; based on use of a probability sample so that the sample represents a cross-section of a population

Cumulative frequency distribution: a distribution in which the frequency for eachattribute is added to the next higher or lower attribute in the distribution, beginning with the lowest attribute and adding down the distribution or with the highest attribute and adding up the distribution; cumulative frequency distributions are useful for saying how many respondents answered above or below a certain attribute

Cumulative percentage distribution: a distribution in which the percentage for each attribute is added to the next higher or lower attribute in the distribution, beginning with the lowest attribute and adding down the distribution or with the highest attribute and adding up the distribution; cumulative percentage distributions are useful for saying what percentage of respondents answered above or below a certain attribute
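
Both cumulative distributions follow the same running-total logic; a sketch in pure Python (the scores are invented):

```python
from collections import Counter

scores = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]
frequencies = Counter(scores)
n = len(scores)

cumulative_freq, cumulative_pct, running = {}, {}, 0
for attribute in sorted(frequencies):              # lowest attribute first
    running += frequencies[attribute]
    cumulative_freq[attribute] = running           # cumulative frequency
    cumulative_pct[attribute] = 100 * running / n  # cumulative percentage
```

For example, 60 percent of these scores fall at or below the attribute 3.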

Curvilinear relationship: a relationship between two variables in which the direction of the relationship moves one way and then reverses; for example, infant mortality rates are high for the youngest mothers, then decline as mothers are older, only to rise again for the oldest mothers; also called a nonlinear relationship

D

Data: the specific bits of information collected by a scientifically valid method of collection; data may be collected in the form of observations, by means of an experiment, or by asking persons questions as part of a survey

Data collection: the planned, systematic process of obtaining data to answer a research question

Data cleaning: the process of reviewing codes for attributes entered into a computer data file to find and correct errors

Database: a searchable computer-based compilation of information on a topic or covering a discipline

Data entry: the process of entering codes into a data file stored in a computer; data entry must be done according to the rules of the software program being used

Data file: coded data stored in a computer according to locations specified in a codebook

Data modification: changing or adding data after the initial data were coded; examples include developing composite scores or recoding open-ended responses to form new categories

Deductive logic: a form of reasoning from a general principle or statement, often based on a theoretical framework; for example, derivation of a hypothesis from a theoretical framework

Degrees of freedom: a value used in interpreting tests of statistical significance; degrees of freedom are calculated in different ways for different tests of significance

Dependent variable: the variable that depends on or is influenced by another variable; dependent variables are what researchers seek to understand and explain

Descriptive research: investigations whose purpose is to provide precise descriptions of variables and their relationships; surveys are frequently used as designs for descriptive research

Descriptive statistical analysis: analysis of data to describe the characteristics of a sample or for measuring relationships between variables; examples include measures of central tendency (mean, median and mode), measures of variability (variance and standard deviation) or measures of association (correlation, chi square)

Descriptive statistics: statistics used to describe features of distributions of scores, such as means and standard deviations

Design: a plan for the collection and analysis of data; includes selection of a method of collecting data, ways of measuring variables, a sampling plan, and plans for the analysis of the data to be collected

Dimension: a specified and defined aspect or component of a concept selected for measurement; dimensions of a concept are identified by the process of conceptualization

Direct relationship: see positive relationship

Discrete variable: a variable whose attributes cannot be separated into smaller units; for example, gender exists in only two forms - male or female

Distribution: an ordered set of numbers showing how many times each number occurred, from the lowest to the highest number or the reverse

Download: the act of copying information from a computer-based file, such as those found on Web sites, to the hard or floppy drive of a computer

Draft questionnaire: the form of a questionnaire ready for pretesting; a draft questionnaire is usually revised based on information obtained during one or more pretests

E

Ecological fallacy: an error in drawing a conclusion about the behavior or attitudes of individuals when data are collected at the level of groups to which individuals may belong

Edge coding: a way of showing codes for responses in which the codes assigned to responses are written in the margin of the questionnaire opposite the item to which they refer

Email: stands for "electronic mail," a form of communication using the Internet as a way of connecting to persons you wish to communicate with

Email survey: a survey conducted by sending a questionnaire by email to a list or sample of email addresses; respondents are asked to complete and return the questionnaire by email

Empirical: refers to using one's senses (sight, hearing, touch, smell, and taste) to learn about events; empirical research is based on measurement of observable events

Empirical generalization: a statement or conclusion based on empirical results; basing a conclusion on a relationship between two indicators is an example of an empirical generalization

Empirical relationship: a measured or observed relationship based on data for two variables

Empiricism: the use of one's senses to observe and record events external to ourselves; scientific inquiry is based on knowledge derived from observation

Enumeration: the process of collecting data from all the members of a population;also called taking a census

Equivalent forms measure of reliability: a technique for estimating reliability based on the degree to which results from two equivalent scales or sets of observations are associated; a high level of association indicates high reliability

Evaluation research: research undertaken to see whether a program or activity is meeting or has met the objectives set for it

Executive summary: a summary of a report prepared to give a brief but complete description of the purpose, methods used, results, and conclusions of an investigation; executive summaries are often written to be understood by persons in administrative positions and those without research training

Experiment (experimental design): a research method used to test hypotheses under carefully controlled conditions designed to rule out the effects of any variables other than the experimental treatment; elements of an experiment include random assignment of subjects to either the experimental or the control group; measurement of the dependent variable in both groups at the beginning of the experiment; application of the experimental or independent variable to the experimental but not the control group; measurement of the dependent variable in both groups at the end of the experiment; and comparison of the pretest and posttest measurements of the dependent variable for both groups; due to the effect of the experimental treatment, larger differences between pretest and posttest measurements are expected in the experimental as opposed to the control group

Experimental effect: in an experiment, the measure of the impact of the experimental treatment upon members of the experimental group; the experimental effect is measured as the difference in pretest and posttest scores in the experimental as opposed to the control group

Experimental group: the group of subjects in an experiment who receive the experimental treatment as contrasted to the control group whose members are not subjected to the experimental treatment

Experimental mortality: refers to the loss of subjects during the course of an experiment; high experimental mortality undermines the validity of an experiment

Experimental treatment: in an experiment, this is the variable that is changed by the experimenter to see its effect on the dependent variable; also called the independent variable or experimental variable

Experimental variable: this is another name for experimental treatment

Experimenter bias: any potential source of error introduced in an experiment in the way the experiment is designed, the way data are collected and analyzed, or how conclusions are drawn

Explanatory research: research undertaken to explain why certain behavior occurs; seeks to provide an explanation for why a relationship exists

Exploratory research: research carried out to learn more about a problem or topic; usually undertaken to collect data for designing a descriptive or explanatory investigation

External validity: refers to the degree that the results of an experiment can be extended or generalized beyond the conditions of the experiment to conditions in the real world

F

Face validity: the characteristics of indicators that suggest they are a reasonable measure of a variable; example, questions about whether girls have the same right to education as boys would be reasonably valid indicators of attitudes toward gender equity

Field jottings: brief notes taken during an observation session to provide a basis for preparing more extensive field notes

Field notes: the full, detailed descriptions, sometimes based on field jottings, used to describe what occurred during an observation period; may also contain hypotheses and tentative explanations for what was observed

Field research: generally refers to qualitative research conducted in a natural setting, as in a village or other public area

Filter question: see contingency question

Findings: see results

Focus group: a group of persons organized by an investigator to obtain detailed information about a topic or issue through unstructured but guided discussion

Formative evaluation: an evaluation carried out during the development of a program; used to produce data for guiding the future development of the program

Frequency: the number or count for the occurrence of an attribute of an indicator or variable

Frequency distribution: an ordered list of the frequencies or counts for all the attributes of an indicator

Frequency matching: a technique for creating equivalent experimental and control groups based on randomly assigning the same number of subjects with similar specific characteristics (so many of one gender, age, ethnic group, etc.) to each group

Frequency polygon: see line graph

G

Generalization: a statement based on the conclusions of a study that extends the conclusions to a broader or more general level

Generalizing: the process, based on logic, of extending conclusions to a broader or more general level; generalizing may be done empirically, as when a statistic, based on a sample, is generalized to the population from which the sample was drawn, or may be done theoretically by generalizing from results based on indicators to theoretical relationships among concepts represented by the indicators

Grounded theory: development of a theoretical explanation for behavior based on the analysis of data; this approach differs from the traditional deductive derivation of a hypothesis; grounded theory is used most often to generate explanations for behavior observed in qualitative investigations

Grouped data: continuous data that are combined into larger intervals or groups; example, instead of analyzing data for the exact ages of respondents, ages could be combined into five or ten-year intervals
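
Grouping is a one-line transformation; a sketch in pure Python (the ages are invented for illustration):

```python
from collections import Counter

ages = [21, 25, 34, 37, 42, 48, 53, 58, 59]

# Collapse exact ages into ten-year intervals: 20-29, 30-39, ...
grouped = Counter(f"{(age // 10) * 10}-{(age // 10) * 10 + 9}" for age in ages)
```

`grouped` maps each interval label to its frequency; for example, three of these ages fall in the 50-59 interval.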

Guttman scaling: a composite measure in which the scores for items indicate the expected pattern of responses

H

Halo effect: in interviewing, the tendency to expect to receive a response in a certain (biased) way based on how previous respondents had responded; represents a systematic error in data collection

Hand analysis: analysis of data by hand counting; also referred to as tallying responses

Haphazard sample: see convenience sample

Histogram: see bar chart

History effect: the influence of events on subjects during the course of an experiment; example, an experiment to change attitudes toward some group could be invalidated by a major public event concerning the group in question

Home page: the initial screen or page shown when you visit a Web site; the home page generally has links to other pages on the site and to other related sites

Hypothesis: a tentative statement of an expected relationship between variables, usually derived deductively from a theoretical framework; hypotheses may also be based on empirical findings or conclusions; hypotheses are confirmed (accepted) or disconfirmed (rejected), based on empirical data

Hypothesis testing: the process of obtaining empirical data to judge whether a hypothesis is confirmed (accepted) or disconfirmed (rejected); statistical tests are used in making this judgment

Hypothetical-inductive process: based on the combined use of deductive logic to derive a hypothesis followed by use of inductive logic to test whether the hypothesis is confirmed (accepted) or disconfirmed (rejected)

I

Independent (independence): the lack of a relationship between two variables; when no relationship is observed, the variables are said to be independent

Independent variable: the variable that influences the value of another variable (the dependent variable); in an experiment, the independent variable is the one manipulated by the experimenter and is also called the experimental or treatment variable

Index: a composite measure consisting of two or more indicators assumed to be of the same level of intensity; the indicators may be selected because they represent different dimensions of the concept the index is intended to measure

Index score: the interim composite score assigned to mixed type responses as a step in deriving a final Guttman score for a set of items; see Guttman scaling

Page 14: Glassory of Research

Indicator: a variable used to measure a concept or one of its dimensions

Indirect relationship: see negative relationship

Inductive logic: a form of reasoning used in deriving conclusions from the results of an investigation; reasoning from the bits or separate pieces of data to a conclusion

Inequality signs (< and >): are used in reporting the results of statistical tests of significance to show whether the result produced a probability level of "greater than," shown as >, or "less than," shown as <, the .05 or .01 level of significance

Inferential statistical analysis: analysis used in conducting statistical tests of significance and for estimating parameters in a population from results obtained from a sample drawn from the population

Informed consent: the ethical practice of providing respondents or subjects with information about a study, particularly any risks involved, so they can make an informed decision about participating in the study

Instrumentation effect: any effect the process of measuring has on the data obtained in an investigation; in an experiment, administration of the pretest could affect scores on the posttest, thus posing a threat to the validity of the experiment

Inter-analyst reliability: the degree to which the observations or ratings of the main investigator and one or more independent observers or analysts agree with one another; a high level of agreement indicates that the rating or coding categories have a high level of reliability

Internal validity: the degree to which the results of an experiment can be attributed to the effects of the experimental (independent) variable and not to outside variables

Internet: the set of telecommunication connections and standards for exchanging information and accessing Web sites from one computer to another throughout the world

Internet survey: a form of survey in which questions are posted on a Web site or sent by email to respondents who reply by completing the questionnaire on the Web site or sending responses by email

Interpretation (of results): the process of saying what the results mean; the purpose of interpretation is to develop the conclusions of an investigation or to explain what was found

Interrupted time series design: a form of a quasi experiment based on one group, with no control group; the occurrence of some variable is compared over time before and after some event that is thought to have an influence on the variable; example, does a large increase in the tax on cigarettes cause a decline in sales? data for sales before and after the imposition of the tax would be compared to answer this question

Page 15: Glassory of Research

Interval: the range of numbers used for grouping continuous data

Interval measurement: based on an ordered set of categories where the intervals between the categories are assumed to be equal; the numerical values assigned, however, are not based on an absolute zero (examples: intelligence scores, scores on an attitude scale)

Interval sample:   see systematic random sample

Interview schedule: the set of questions used to interview respondents; today, the term questionnaire is used in place of interview schedule, guide, or schedule

Interviewing: the process of collecting data from respondents by asking questions and recording their responses; in structured interviewing, a questionnaire with a fixed set of questions is used; in unstructured interviewing, questions are asked informally and in any order, more in a conversational style with respondents

Intra-analyst reliability: refers to the consistency in recording observations or in coding data by a single investigator

Inverse relationship: see negative relationship

Item: a question or statement used in a questionnaire to obtain data about a variable

Item analysis: the process of determining the extent to which items used in a composite measure are related to one another and how well each item contributes to the composite score; item analysis is used to assess the unidimensionality or internal validity of the items making up a tentative composite measure

K

Key informant: a well-informed person who provides crucial information in a qualitative investigation; may also review an investigator's description and explanation of events for accuracy and validity; information obtained from key informants is often vital to the success of field research

Key terms: words or phrases used in conducting a search of a database or for identifying relevant Web sites; key terms are selected to represent all the ways a concept may be expressed

L

Level of confidence: estimate of the probability that a parameter lies within a specified range of values; example, a researcher might report a 95% level of confidence that the mean for the size of households in a population lies between 8.25 and 10.13 persons
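A confidence interval like the one in the example is commonly computed with the normal approximation, mean ± z × standard error. The sketch below is illustrative only: the function name and the sample figures (n = 100, mean 9.19, sd 4.7) are assumptions, not from the glossary.

```python
import math

def confidence_interval_95(mean, sd, n):
    """95% confidence interval for a population mean, using the
    normal approximation: mean +/- 1.96 * standard error."""
    standard_error = sd / math.sqrt(n)
    margin = 1.96 * standard_error  # z = 1.96 spans about 95% of a normal curve
    return mean - margin, mean + margin

# Hypothetical household survey: 100 households, mean size 9.19, sd 4.7
low, high = confidence_interval_95(9.19, 4.7, 100)
```

With these made-up figures the interval works out to roughly 8.27 to 10.11 persons, the same kind of range the definition describes.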

Level of measurement: refers to the characteristics of measurements used to collect data; there are four levels of measurement - nominal, ordinal, interval, and ratio

Level of significance: the probability that the result of a statistical test could be due to sampling error; for example, a result said to be significant at the .05 level indicates that the result could have occurred due to chance variations among samples less than 5 times out of 100 random samples of the same size from the target population; at the .01 level of significance, the result would be considered as occurring due to sampling error less than 1 time out of every 100 samples

Likert scale: a composite measure based on a set of responses that range from one extreme to another; example, a scale may have a number of items with responses ranging from strongly agree, through agree, uncertain, and disagree, to strongly disagree

Line graph: graphic way to present data in which the frequencies for attributes of a variable are represented by dots at the intersection of the attribute, as arranged along the X axis of the graph, and the values for frequencies, listed along the Y axis; the dots are then connected by a line, which creates the line graph; also known as a frequency polygon

Line of best fit: in a graph, shows the relationship between two variables; the line of best fit is the line that comes the closest to the largest number of dots representing the values for each pair of attributes for each respondent
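For two interval-level variables, the line of best fit is usually computed by ordinary least squares: the slope is the covariance of X and Y divided by the variance of X, and the intercept follows from the means. A sketch (the function name and data are illustrative):

```python
def best_fit_line(xs, ys):
    """Ordinary least-squares fit of y = a + b*x.
    slope b = covariance(x, y) / variance(x); intercept a = mean_y - b * mean_x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# A perfect positive relationship: y is exactly 2x
a, b = best_fit_line([1, 2, 3, 4], [2, 4, 6, 8])
```

For this made-up perfect relationship every dot falls exactly on the line (intercept 0, slope 2); with real social data the dots scatter around it.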

Link: a connection provided on a Web site to other pages on the site or to a related Web site

List of references: the list of the publications, Web sites, or other sources of information cited in a report; references are prepared according to rules and listed alphabetically by the last name of the author; the list of references is placed at the end of the report

Longitudinal design: a research design used to measure changes in variables as they occur; data are obtained through successive waves of data collection from the same sample over a period of time

M

Matrix format: a table format for presenting items that vary in content but all have the same response categories; used frequently in presenting items asking about attitudes or views about some topic or issue

Maturation effect: any naturally occurring processes over time that may produce changes in subjects in an experiment; as people grow older, they change in many ways; thus, maturation is a threat to the validity of experiments conducted over long periods of time

Mean: one of the three measures of central tendency; the value of the sum of a set of scores divided by the number of scores; in everyday communication, the term average is used to indicate the mean

Measure: an indicator or set of indicators used to obtain data for a variable; also referred to as a measuring instrument

Measurement: the process of assigning numerical values or qualitative descriptions to attributes of an indicator or variable

Measurement error: the difference between the true value for an indicator and its observed value; the observed value is almost always different from the true value because of systematic and random errors that occur during data collection and analysis

Measuring instrument: see measure

Median: one of three measures of central tendency; the median is the middle score in a distribution

Mixed types: in Guttman scaling, mixed types are response patterns that do not match the expected pattern of responses; mixed types represent errors and reduce the coefficient of reproducibility, which is the measure of success in creating a Guttman scale

Mode: one of the three measures of central tendency; the mode is the most frequentscore in a distribution
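The three measures of central tendency (mean, median, and mode, defined above) are available directly in Python's standard library; a small illustration with made-up scores:

```python
import statistics

scores = [4, 7, 7, 8, 9, 10, 11]  # made-up distribution of scores

mean = statistics.mean(scores)      # sum of scores / number of scores
median = statistics.median(scores)  # the middle score
mode = statistics.mode(scores)      # the most frequent score
```

Here the mean and median are both 8, while the mode is 7; in a skewed distribution the three values can differ considerably.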

Mortality effect: refers to the loss of subjects during the course of an experiment; high mortality is a threat to the validity of an experiment

Multimethod research: an investigation using more than one method of collecting data; for example, an investigator may collect data on the same variables by means of observation, use of a survey, and analysis of available data

Multiple measures, before and after design: a quasi-experimental design in which data are obtained for a dependent variable from an experimental group and a nonequivalent control group at several or more times before and after an event; pre- and post-event data for the two groups are compared to see if the event had any effect on the dependent variable

Multistage sampling: see cluster sampling

Multivariate analysis: the simultaneous analysis of data for three or more variables; may be done in the form of tabular analysis or using statistical tests

N

Natural setting: any setting where people carry out normal, everyday activities; examples: life in the home, village, office, or other public places

Navigating: using navigation buttons and other aids to easily move among pages of a Web site

Navigation buttons: buttons or aids on a Web site one can click on to move quickly from one page of the site to another

Negative relationship: a relationship between two variables in which changes in one variable are associated with changes in the opposite direction for the other variable; example, years of schooling and fertility are negatively related; as schooling increases, fertility tends to decline

Negatively skewed distribution: a distribution in which most scores are located near the low end of the distribution

Network sample: a nonprobability sampling technique in which respondents who are initially contacted are asked to identify other members of the target population for inclusion in the investigation; example, in a study of female entrepreneurs, the first entrepreneurs who were interviewed would be asked to name other female entrepreneurs they know, who would then be contacted, interviewed, and asked to identify additional female entrepreneurs to be included in the sample, and so on; also called chain or snowball sampling

Nominal measurement: the lowest level of measurement; consists of giving names to categories or the attributes making up an indicator; nominal measurement simply indicates that the categories differ; for example, male and female are the categories or attributes of the variable gender

Nonequivalent control group design: a form of quasi-experimental design based on use of a control group that is thought to be similar to the experimental group but whose members were not selected by random assignment

Nonlinear relationship: see curvilinear relationship

Nonprobability sampling: any form of sampling not based on random or chance selection of the members of the sample

Nonreactive measure: see unobtrusive measurement

Nonsignificant: any result judged to be within the range of chance variation that occurs from random sampling

Normal distribution: a distribution with a distinctive bell shape and that has certain specific properties; the most important for researchers is that approximately 68% of the scores in a normal distribution lie within ±1 standard deviation of the mean of the distribution, approximately 95% lie within ±2 standard deviations, and over 99% lie within ±3 standard deviations
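The 68/95/99.7 percentages can be checked directly, since the fraction of a normal distribution lying within ±k standard deviations of the mean is erf(k/√2); the helper name below is illustrative:

```python
import math

def fraction_within(k):
    """Fraction of a normal distribution lying within +/- k
    standard deviations of its mean."""
    return math.erf(k / math.sqrt(2))

within_1 = fraction_within(1)  # approximately 68%
within_2 = fraction_within(2)  # approximately 95%
within_3 = fraction_within(3)  # over 99%
```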

Null hypothesis: a hypothesis established as a basis for conducting a statistical test of significance; the null hypothesis states that no relationship exists between two variables in the population from which a random sample was drawn; the null hypothesis is accepted or rejected, depending on the level of significance of the result of the test

Number: refers to the size of a sample or the number of cases in an analysis

O

Objectivity: the ability to observe or reason without personal bias; while objectivity is virtually impossible to attain in all aspects of research, it is an ideal scientists strive to achieve

Observation: is the process of using one's senses to perceive and record information about some aspect of the natural world; social scientists observe human interaction and behavior

Observational design: a flexible plan for conducting observations; usually the basis of field research

Observed value: the value for an indicator obtained as a result of measurement or observation; this is the value we know and almost always differs from the true value of the indicator because of random or systematic errors in data collection

One group, pre- and posttest experimental design: a quasi-experimental design based on a single group, with a pretest measurement of a dependent variable, followed by an experimental treatment and then a posttest of the dependent variable; this design is subject to all the threats to internal validity

Online: refers to connecting to the Internet, databases or other computer-based sources of information by means of a computer

Open-ended items: questions where the respondent answers in his or her own words; a question is followed by blank space where the response is recorded or written; there are no response categories as there are with closed items

Operational definition: the definition of a concept as expressed by the way it is measured; the operational definition of social status, for example, is given by the item or items used to measure it

Operationalization: the process of developing measurements for indicators

Ordinal measurement: a measurement based on ranking or ordering of the attributes of a variable according to some criteria; level of schooling is an example of an ordinal measure - no schooling, primary level, secondary level, post-secondary level

Over-generalization: a statement or conclusion that goes beyond any supporting findings or results

Over-generalizing: the act of drawing a conclusion that is not supported by data; example, claiming that most men in a town prefer a certain political candidate when data were collected only from men who had attended a college or university and who represent a minority of men in the town

P

Page: a section of a Web site containing information on one of the topics or issues covered by the site

Panel design: a research design based on successive data collection from the same sample to measure changes in variables as they occur; panels are used in longitudinal research

Parameter: the value of any indicator in the target population; an enumeration or census produces parameters; generally we can only estimate parameters from statistics that summarize data from a probability sample taken from the target population

Participant observation - a qualitative research technique in which the investigator participates substantially in the activities of a group; used to develop an in-depth understanding of the behavior of the group and to see things as members of the group do

Participatory rural appraisal (PRA): an approach to data collection in which respondents are encouraged to participate fully in all phases of the research; is similar to and employs many of the features of Rapid Rural Appraisal

Paste: the process of adding material taken from a database, Web site, or other source to the document you are writing

Percent: a proportion multiplied by 100; literally means per 100; example, if 13 workers out of a workforce of 170 were absent on a given day, the proportion absent is 13/170 or .076 and the percent is .076(100) or 7.6%
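The arithmetic in the definition, written out in code (the helper names are illustrative):

```python
def proportion(part, whole):
    """A part expressed as a decimal fraction of the whole."""
    return part / whole

def percent(part, whole):
    """A proportion multiplied by 100 ("per 100")."""
    return proportion(part, whole) * 100

# The example from the definition: 13 workers absent out of 170
p = proportion(13, 170)
pct = percent(13, 170)
```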

Perfect relationship: a perfect relationship occurs when a coefficient of correlation equals -1.00 or +1.00; this means that a certain amount of change in one variable is associated with a specific amount of change in the other variable; in physics, pressure and volume are perfectly related; an increase in pressure is always associated with a decrease in volume; in the social sciences, perfect relationships are seldom found

Personal interviewing: refers to the process of collecting data in face-to-face contact with respondents as opposed to conducting telephone interviews

Pie chart: a graphic presentation of results in which the slices of a circle (the pie) represent the proportions of each attribute of a variable

Population: the entire group of persons or other cases of interest to an investigator; the group to which an investigator may want to generalize from the sample used in an investigation

Positive relationship: a relationship between two variables in which they change in the same direction; as one increases or decreases in value so does the other; also called a direct relationship

Positively skewed distribution: a distribution in which most scores are toward the high end of the distribution

Posttest measurement: in an experiment, measurement of the dependent variable taken at the end of the experiment

Precision matching: a technique for establishing equivalent experimental and control groups by randomly assigning subjects with exactly matching characteristics to one group or the other; example, for two persons of the same gender and age, one is randomly assigned to the experimental group and the other to the control group; this process would be repeated for each set of persons with matching characteristics; precision matching is the strongest basis for creating equivalent experimental and control groups

Precoding responses: the process of assigning numbers to represent the attributes of an indicator at the time the response categories are created; pre-coding is frequently used with responses for closed items

Predictive validity: a way of estimating the validity of a measuring instrument, such as a scale, based on the association of scores on the instrument with scores for some variable taken at a later time; example, the accumulated grade point average of university students could be used to validate a test for university success given while the students were in secondary school

Premature closure: occurs when one draws a conclusion based on insufficient data

Pretest: a test to see if a questionnaire is ready for use in a survey; generally based on selecting a small sample similar to the one to be used in the actual survey; all the elements of the questionnaire are tested, from the introduction to the analysis of the responses obtained

Pretest measurement: in an experiment, measurement of the dependent variable at the beginning of the experiment, before the administration of the experimental variable

Pretesting: see pretest

Print out: the copy of materials printed from a data file, Web site, or a file stored in a computer

Probability level: refers to the extent to which the results of a statistical test of significance could be due to random variation that always occurs in sampling (called sampling error); two probability or "p" levels are typically used in reporting results - the .05 and the .01 levels; the .05 level indicates that the result could have occurred due to chance less than 5 times out of every 100 samples; the .01 level indicates that the results could be due to chance not more than once in every 100 samples

Probability sampling: any method of sampling based on random or chance selection, where each sampling element or unit has an equal chance of being selected

Probe: a technique used in interviewing to encourage a respondent to provide a clearer or more complete response

Proportion: a fraction or part of something, expressed as a decimal between 0 and 1.0; example, the proportion of females at a university with 750 females and 4,500 males is 750/5,250 or .143, or about .14

Proxies: easily developed substitutes for more precise forms of measurement; proxies are frequently used in Rapid Rural Appraisal to provide data quickly and inexpensively in place of indicators that take longer to develop and test for validity and reliability; example, the size or construction materials used in a house could be used as a proxy for family wealth

Purposive sampling: a nonprobability method of selecting a sample based on selecting respondents because they are uniquely able to provide needed information; example, to learn about how decisions in villages are made, an investigator might select samples of village leaders and elders

Q

Qualitative analysis: examination of data in the form of verbal descriptions rather than numbers; the purpose of qualitative analysis is to describe behavior and provide an explanation for what was observed

Qualitative interviewing: a loose, flexible approach to interviewing based on exploration of topics that are discussed in depth with respondents; respondents are encouraged to talk at length about issues presented by the interviewer

Qualitative research: a flexible approach to data collection, based mainly on written descriptions of observed behavior; casual and participant observation and unstructured interviewing are the main ways of conducting qualitative research

Qualitative survey: a survey based on use of an unstructured questionnaire; the interviewer uses a conversational style of interaction with respondents to get responses in the respondents' own words and with emotional content

Qualitative variable: a variable described in words or by the names of the categories of which it is composed as opposed to a quantitative variable, which is measured in numbers; gender is an example of a qualitative variable

Quantitative analysis: analysis of data in the form of numbers; begins with the analysis of each variable, one at a time (univariate analysis), and may proceed to bivariate and multivariate analyses

Quantitative research: based on numerical measurement of indicators; used to establish quantitative relationships among variables

Quantitative variable: a variable that is measured in numbers as opposed to a qualitative variable, which is not; the number of faculty of a university is a quantitative variable

Quasi-experimental designs are based on some but not all the features of the classical experiment; most quasi-experiments lack complete control over the independent variable, but they have the advantage of estimating the effects of variables under real social conditions; quasi-experiments may be low on internal validity, but are often high on external validity

Questionnaire - a set of carefully phrased and tested questions or items prepared for the collection of data; surveys are based on use of questionnaires

Quota sampling: a nonprobability method of sample selection based on setting quotas for cases from defined components of the target population; once the criteria and quotas are set, convenience or other nonprobability methods are used to select the sample; quota sampling has the advantage of at least including sample elements from various segments or components of the target population

R

Random error: any form of error that may occur in a particular instance during data collection, coding, transfer, or analysis; examples: a poorly asked question, a misunderstanding in recording a specific response, or an error made in coding data

Random selection: selection based on chance and chance alone, with no human judgment or preference involved; can be accomplished using a table of random numbers or by selecting sampling elements by chance from a box

Randomization: a process of assigning subjects to either the experimental or control group by chance

Range - a measure of the dispersion or variation among scores; measured as the difference between the lowest and highest score plus 1

Rapid rural appraisal: an approach to data collection using approximations, called proxies, for measurement of indicators, that permits collection of data quickly and inexpensively; often used to help make decisions about the development or future directions of programs; includes an emphasis on the participation of local persons to the maximum extent possible in the conduct of the investigation; also known as participatory rural appraisal

Rank order: the result of arranging scores in descending order from the highest to lowest; the highest score is given a rank of 1, the next lower score is given a rank of 2, and so on; rules are followed for assigning tied scores

Rapport: is the feeling of trust and confidence an interviewer seeks to establish and maintain with respondents

Rate: a measure of how frequently something occurs within the limits of a larger population; example, the birth rate is the number of babies born within the population of an area; in social research, rates are expressed in terms of a standardizing base to eliminate differences in the sizes of the populations being examined; a base of 1,000 is used for calculating birth rates; thus, if 55 babies were born in a region with 2,400 persons, the birth rate would be 55/2,400(1000) or 22.9

Ratio: the relation between two frequencies; a ratio is found by dividing one frequency by another; in social research ratios, like rates, are generally expressed in terms of a standardizing base of 100, 1,000, or some other base; example, using a standardizing base of 100, the ratio of females to males in a university with 750 females and 4,500 males is 750/4,500(100) or 17; this says that there are 17 females for every 100 males at the university
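The rate and ratio calculations above follow the same pattern: one frequency divided by another, multiplied by a standardizing base. A sketch using the glossary's own figures (the function names are illustrative):

```python
def rate(events, population, base=1000):
    """How often something occurs per `base` members of a population."""
    return events / population * base

def ratio(freq_a, freq_b, base=100):
    """One frequency relative to another, per `base`."""
    return freq_a / freq_b * base

birth_rate = rate(55, 2400)      # 55 births among 2,400 persons, per 1,000
female_ratio = ratio(750, 4500)  # 750 females to 4,500 males, per 100
```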

Ratio measurement: the highest level of measurement, based on a real zero point; thus, any number is a ratio of any other number; for example the age of 40 is twice as large as the age of 20 by a ratio of 2

Raw data: the original data obtained by some form of data collection before data are coded or modified in any way

Reactivity: occurs when the process of measurement influences the results obtained; knowing they are being observed, persons, for example, may act differently than they would in normal situations; in that situation, measurement would be reactive

Record: has two meanings: (1) the written description of observations made during a session in the course of field research; and (2) the part of a data file, such as a set of data for a single respondent or the full description of a document retrieved from a database

Reference: a description of a source cited in a report, such as a book or journal article, prepared in a specified fashion or format

Reliability: is the degree to which an indicator produces essentially the same result with repeated measurements

Respondents: the individuals from whom data are obtained, usually by means ofinterviewing or by completing a questionnaire

Response rate: the percentage of successfully completed interviews or self-administered questionnaires over the number that was expected to be completed; the latter usually is the size of the selected sample

Response set: the tendency of respondents to answer questions or items in the way they answered previous questions; to avoid response set, positive and negative items are mixed up in any set of items, making the respondent think about each item before answering

Results: what is discovered when the data are analyzed; findings represent the answer to the question being investigated; also called findings

Review of the literature: the process of reading research reports on a topic of interest; learning about the results of research on a particular problem or topic

Rounding: the process of establishing the last digit in a number derived from a calculation; rules for rounding are given in Chapter 17, Box 17.1

S

Sample: a part of a target population; samples are selected by either probability or nonprobability methods; with probability samples we can generalize results from a sample to the target population; this cannot be done with nonprobability samples

Sample design: the plan prepared for the selection of a sample from a target population; the simple random sample is one kind of sample design

Sample frame: a list of the sampling elements or units comprising a target population

Sampling element: a single member or unit of the target population; example, a single member of the full time teaching faculty of a university in the spring of 2003; also called a sampling unit

Sampling distribution (of the mean): a distribution of means that could be calculated for all possible samples of a given size that could be drawn from apopulation

Sampling error: the error in measuring a variable that occurs because of variations due to random selection of samples; when random samples are used, the amount of sampling error can be calculated and used in estimating population parametersand in conducting tests of statistical significance

Sampling interval: the ratio of the size of the target population to the size of the sample; used as the basis for selecting a systematic or interval sample
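A systematic (interval) sample can be sketched as: compute the interval k, pick a random start within the first interval, then take every k-th element of the sampling frame. The function name is hypothetical, and the sketch assumes the frame size divides evenly by the sample size:

```python
import random

def systematic_sample(frame, sample_size):
    """Take every k-th element after a random start, where
    k = population size / sample size (the sampling interval).
    Assumes len(frame) is an exact multiple of sample_size."""
    k = len(frame) // sample_size
    start = random.randrange(k)  # random start within the first interval
    return [frame[start + i * k] for i in range(sample_size)]

frame = list(range(1, 101))            # a frame of 100 elements
sample = systematic_sample(frame, 10)  # sampling interval k = 10
```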

Sampling unit:   see sampling element

Scale: a composite measure based on multiple items of varying intensity; used for measuring beliefs and attitudes

Scale types: in Guttman scaling, scale types are response patterns that match the expected set of responses

Scatter plot: a form of graphic presentation of relationships between two variables;each pair is represented by a dot at the intersection of the value for the attribute of one variable, as displayed on the X axis, and the value for the attribute of the other variable, displayed on the Y axis

Scientific inquiry: a way of examining the world around us based on logical analysis of what we learn through use of our senses

Scientific method: the approach used in scientific inquiry to establish knowledge about the natural world, based on principles for identifying concepts, developing hypotheses, collecting and analyzing data to test hypotheses, and generating findings, which are incorporated into theories for explaining natural processes

Score: any numerical value used to represent an attribute of an indicator or some dimension of a variable

Scoring:   the process of assigning numbers to the attributes of a variable

Scroll: to move up or down the content of a page of a Web site

Search engine: a software program specially designed to allow persons to search the Internet to find Web sites of interest

Search service: see search engine

Search strategy: the plan developed for selecting relevant records from a database or to guide a search for Web sites

Secondary analysis: an investigation based on analysis of previously collected data; example, reanalysis of survey data collected by another researcher or further analysis of data made available by a government ministry

Selective observation: the tendency to give extra emphasis to certain observations that agree with a preconceived position and to ignore observations that do not agree with the preconception

Self-administered questionnaire: a questionnaire designed for completion by respondents without the assistance of an interviewer

Self-weighted sample: a sample selected so that each segment represents its proportion of the population; using self-weighting simplifies analysis of data from samples selected through successive stages, such as area, cluster, or multistage samples

Session: a period of observation as part of a field or observational study; also used to describe a period of time for the operation of a focus group

Significance level: see level of significance

Simple observation: observation of behavior, generally in a natural setting, in which actions are recorded in narrative form and later analyzed; also known as casual observation

Simple random sample: a probability sample in which each sample element has an equal chance of being selected
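In practice a simple random sample is usually drawn with a pseudo-random number generator rather than a table of random numbers; for example, with Python's standard library (population and sample size are made up):

```python
import random

population = list(range(1, 501))        # 500 sampling elements
sample = random.sample(population, 25)  # each element has an equal chance
```

`random.sample` selects without replacement, so no element can appear in the sample twice.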

Skewed: a distribution that differs greatly from a normal distribution; instead of most scores occurring near the mean of the distribution, most scores occur at the high or low end of the distribution

Snowball sample   - see network sample

Social distance scale: see Bogardus social distance scale

Social indicators: broad, standardized measures of the quality of life or other socio-economic conditions of geographic areas such as nations, metropolitan areas, or other areas; used to assess health conditions, educational levels, food availability, violence, and other conditions

Software package: see statistical analysis package

Spearman rank order coefficient of correlation: measurement of association between scores for two indicators based on their rank order instead of the original values of the scores
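A sketch of the computation: convert each set of scores to ranks (tied scores get the average of the ranks they would occupy), then apply the rank-difference formula ρ = 1 − 6Σd²/(n(n² − 1)). That formula assumes no tied ranks, and the function names and data below are illustrative:

```python
def ranks(scores):
    """Rank scores from highest (rank 1) downward; tied scores get
    the average of the ranks they would occupy."""
    ordered = sorted(scores, reverse=True)
    return [ordered.index(s) + (ordered.count(s) + 1) / 2 for s in scores]

def spearman_rho(xs, ys):
    """Rank-difference formula; assumes no tied ranks in xs or ys."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

rho = spearman_rho([86, 97, 99, 100, 101], [2, 20, 28, 27, 50])
```

For these made-up scores ρ is 0.9: the rank orders of the two indicators almost match, even though the raw values are on very different scales.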

Split half reliability: a measure of the reliability of a scale or other measuring instrument based on the degree of association between two equivalent forms or halves of the scale; data for both forms are collected at the same time

Spurious relationship: a false relationship; a spurious relationship becomes apparent when the initial relationship between two indicators disappears after the effect of a third variable is taken into account

Stakeholders: are individuals who have a strong interest in the outcome of an evaluation; in the evaluation of an educational program, stakeholders could include teachers, administrators, and parents, each of whom might have different expectations for the results of the evaluation

Standard deviation: a measure of variability among a set of scores; it is based on the sizes of the deviations of each score from the mean of the scores; in a normal distribution, approximately 68% of the scores lie within ±1 standard deviation, approximately 95% within ±2 standard deviations, and over 99% lie within ±3 standard deviations    

Standard error: see standard error of the mean

Standard error of the mean: the standard deviation of a sampling distribution; it shows how much sample statistics, such as a mean, will vary from one random sample to the next
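The relationship can be checked by simulation: draw many random samples from a population, and the standard deviation of their means should approximate σ/√n. The population parameters and sizes below are made up for illustration.

```python
import math
import random
import statistics

random.seed(7)  # fixed seed so the illustration is repeatable
# A made-up population with mean 50 and standard deviation 10
population = [random.gauss(50, 10) for _ in range(100_000)]

n = 25
# The means of many random samples approximate the sampling distribution
sample_means = [statistics.mean(random.sample(population, n))
                for _ in range(2_000)]

empirical_se = statistics.stdev(sample_means)               # simulated SE
theoretical_se = statistics.pstdev(population) / math.sqrt(n)  # sigma / sqrt(n)
```

Both values come out close to 2 here, illustrating why larger samples (bigger n) give more stable estimates.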

Statistic: is any finding or result based on a sample; when probability samples are used, statistics based on the analysis of data from the sample can be used to estimate the corresponding parameters of the target population from which the sample was drawn

Statistical analysis package (or program): a software program designed to analyze data stored in a computer

Statistical inference: using the results of a statistical test of significance from a sample to make an estimate about relationships among variables in a population; estimates are based on probability levels

Statistical tests of significance: calculations conducted to determine whether differences between means or relationships between variables, for example, are within the range that could be expected due to chance variations that occur from one random sample to the next; statistical tests of significance are based on testing the null hypothesis

Stratified random sample: a probability sample selected separately from two or more sub-groups or strata of a target population; for example, random samples of males and females drawn separately from the population of university students
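The example in the definition (separate random samples of males and females) can be sketched in Python. This is an illustrative sketch, not the book's procedure; the strata dictionary and sample sizes are hypothetical.

```python
import random

def stratified_random_sample(strata, n_per_stratum, seed=0):
    """Draw a simple random sample of the same size from each stratum separately."""
    rng = random.Random(seed)
    return {name: rng.sample(members, n_per_stratum)
            for name, members in strata.items()}

# Hypothetical student IDs: 0-59 are male students, 60-139 are female students
students = {"male": list(range(0, 60)), "female": list(range(60, 140))}
sample = stratified_random_sample(students, n_per_stratum=5)
```

In practice the per-stratum sample sizes are often made proportional to stratum size rather than equal.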

Structured interviewing: interviewing based on the use of a questionnaire, which, in this kind of use, is sometimes called an interview schedule; questions are asked in exactly the same way in all interviews; responses are recorded as given

Structured observation: a quantitative observation technique in which the observed behavior is recorded in terms of pre-established categories; tally marks are recorded each time the defined behavior is observed

Subject reactivity: in an experiment, the tendency of subjects to act differently than normal because they know they are being observed; a threat to the internal validity of the experiment

Subjectivity: the tendency to form opinions or draw conclusions on personal grounds without sufficient regard for empirical evidence; the opposite of objectivity

Subjects: in an experiment, persons included in either the control or experimental groups

Summative evaluation: an evaluation carried out after a program has been fully developed; the purpose of a summative evaluation is to see whether the program has achieved the objectives set for it

Surfing: the process of moving from one Web site to another using addresses supplied by search engines or links on sites that are visited

Survey: a method of gathering data from persons, usually by means of getting them to respond to items comprising a questionnaire; in developing countries most surveys are carried out by interviewing persons

Systematic error: any kind of error that affects every case or a substantial number of cases in an investigation; for example, a poorly worded question that gives unreliable responses, a mistake in coding that affects all responses for that item

Systematic random sample: a probability sample based on selection of sample elements at a specified interval, beginning with a randomly selected element within the first interval; using a sampling interval of 10, for example, one would select the first sampling element randomly from among the first ten elements on a list and then every tenth element thereafter; also called an interval sample
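The selection rule just described (random start, then every interval-th element) is easy to express in code. A minimal sketch, not taken from the book:

```python
import random

def systematic_random_sample(elements, interval, seed=0):
    """Pick a random start within the first interval, then every interval-th element after it."""
    rng = random.Random(seed)
    start = rng.randrange(interval)   # random position among the first `interval` elements
    return elements[start::interval]

# A hypothetical list of 100 elements with a sampling interval of 10
roster = list(range(1, 101))
chosen = systematic_random_sample(roster, interval=10)   # yields 10 elements
```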

T

Table: a way of presenting a large amount of data in very little space; tables can be used to display frequency distributions for one variable or to show bivariate or multivariate relationships among variables

Tally sheet: a sheet used in recording the counts or tallies for the frequencies of attributes of variables; for example, male and female, as the attributes of the variable gender, could be listed as rows in a tally sheet and tally marks, such as ///, could be recorded each time either attribute occurred

Tallying: the process of counting responses or other data by hand to develop frequency distributions
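When tallying is done by computer rather than by hand, the same counting logic applies. As an illustrative sketch (the responses shown are hypothetical), Python's `collections.Counter` plays the role of the tally sheet:

```python
from collections import Counter

# Each observed attribute is counted, just as tally marks would be
# recorded row by row on a paper tally sheet
responses = ["male", "female", "female", "male", "female", "female"]
tallies = Counter(responses)   # a frequency distribution of the attributes
```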

Target population: the specific, concrete population defined in terms of its sampling elements; abstract or general populations are converted to target populations by defining them as precisely as possible, such as the population of full-time employees of a company on the first work day of a given month

Telephone survey (interviewing): the process of conducting a survey by means of telephone interviews  

Test-retest reliability: a technique for estimating the reliability of a measuring instrument based on the degree of association between scores obtained at one time and those obtained at a later time; a high degree of association would indicate high reliability of the instrument

Testing effect: effects on the measurement of an indicator caused by the process of measuring the indicator; in an experiment, obtaining the pretest measurement can change how subjects respond to the posttest measurement of the same variable; a threat to the internal validity of an experiment

Theoretical framework: a set of theoretical statements used for deriving a hypothesis or for supporting an explanation for some behavior

Theory: the logical expression of relationships among abstract concepts; generally developed to explain a set of related behaviors or events

Time series analysis: analysis using data available for a number of points in time for the same indicator; can be used to establish trends or changes in social indicators or other variables

Time series design: a plan for data collection and analysis based on repeated measurement of a variable at two or more times; such analyses are used to measure changes or trends in variables over time

Trend studies (designs): investigations undertaken to measure changes that have occurred in variables; data are collected for variables at two or more points and compared to see what changes or trends are found; trend studies may involve two or more points for data collection in the past or past data collection supplemented with data for the variable at the present time

Triangulation: collection and comparison of data from two or more sources or using two or more methods of data collection; triangulation is important in qualitative investigations to ensure that observations are accurately recorded and interpreted; an example would include collecting data for some indicators by means of observation, from interviewing several key informants, and by checking observations against available data

True experiment: a technique for testing hypotheses under carefully controlled conditions, where the experimental or independent variable is administered to the experimental group but not to an equivalent control group and measurements of the dependent variable are compared between the two groups following the experiment; also called a classical experiment

True value: the actual or real value of a score or other measurement; because of random and systematic errors that can and do occur in research, we seldom know the true value of anything we measure

(The) t test: a statistical test to determine if the difference between two means exceeds the difference that could be due to sampling error
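The t statistic for two independent samples can be sketched as follows. This is an illustrative sketch (not from the book) of the common pooled-variance form; the resulting value would be compared against a t distribution with n1 + n2 − 2 degrees of freedom to judge significance.

```python
import math

def t_statistic(sample1, sample2):
    """Independent-samples t statistic using a pooled variance estimate."""
    n1, n2 = len(sample1), len(sample2)
    mean1 = sum(sample1) / n1
    mean2 = sum(sample2) / n2
    ss1 = sum((x - mean1) ** 2 for x in sample1)   # sums of squared deviations
    ss2 = sum((x - mean2) ** 2 for x in sample2)
    pooled_variance = (ss1 + ss2) / (n1 + n2 - 2)
    standard_error = math.sqrt(pooled_variance * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / standard_error
```

Identical samples give a t of exactly 0; the larger the difference between means relative to the sampling error, the larger the t value.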

Two group, posttest only design: a form of quasi-experiment based on an experimental group and a control group (often a nonequivalent control group) in which data are obtained only after the experimental variable has occurred; only posttest data are obtained and compared for the two groups

Type: any group or category of persons sharing a common set of characteristics that distinguish them from others; in social research, types are constructed by an investigator from data describing the special characteristics of respondents; for example, an official may be classified as the "bureaucratic type" based on his or her obsessive attention to detailed rules and regulations and desire to please his or her superiors

Typology: a classification of persons or groups based on distinctive types created by the investigator for the purposes of analysis; typologies are useful as measures for dependent variables but are hard to interpret as independent variables

U

Unidimensional: defining a concept so that it has only one dimension or measurable set of characteristics

Unit of analysis: the entity used as the basis for combining data for analysis; may be individuals, families, other groups, organizations, geographic areas, or other entities

Univariate analysis: analysis of a single indicator; univariate analysis is generally the first step in the analysis of a body of data; it is undertaken to describe each variable in terms of measures of central tendency (mean, median or mode) and variability (range, variance or standard deviation)

Uniform Resource Locator (URL): the unique address of each Web site

Unobtrusive measurement: any technique of data collection that does not influence the results obtained; for example, observing how persons are dressed and using this as an indicator of social status, or analyzing data already collected; also called nonreactive measures

Unstructured interviewing: a flexible form of interviewing, more in the style of a conversation; the interviewer adjusts the timing and content of questions to be asked and seeks to obtain full answers in the respondent's own words

Unstructured observation: observation of behavior or events as they occur, generally in a natural setting; the action being observed is described in narrative form;participant observation is a form of unstructured observation

Unweighted index: an index in which the indicators making up the index are assigned equal value

Unweighted score: responses to items are simply added to form a composite score; as distinguished from a weighted score in which responses to some items are given more importance by assigning a greater value to them
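The distinction between unweighted and weighted scores can be made concrete with a short sketch (illustrative only; the item responses and weights are hypothetical):

```python
def unweighted_score(responses):
    """Composite score in which every item counts equally."""
    return sum(responses)

def weighted_score(responses, weights):
    """Composite score in which some items are given greater value via weights."""
    return sum(r * w for r, w in zip(responses, weights))

# Three item responses; in the weighted version the second item counts double
unweighted_score([3, 4, 2])          # 3 + 4 + 2
weighted_score([3, 4, 2], [1, 2, 1]) # 3 + 8 + 2
```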

V

Valid frequency distribution: a frequency distribution based on the number of usable responses obtained for an indicator; for example, if 68 respondents out of a sample of 75 provided usable responses to an item, a valid frequency distribution would be based on an N of 68 rather than the N of 75 for the sample

Valid percentage distribution: a set of percentages based on the number of usable responses obtained for a variable; for example, if 68 respondents out of a sample of 75 provided usable responses to an item, a valid percentage distribution would be based on an N of 68 rather than the N of 75 for the sample
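The 68-of-75 example works out as follows in code. A minimal sketch (the yes/no counts are hypothetical): each percentage is computed on the valid N of 68, not the full sample of 75.

```python
def valid_percentages(counts):
    """Percentages based on the valid N: the usable responses only."""
    valid_n = sum(counts.values())
    return {category: round(100 * n / valid_n, 1)
            for category, n in counts.items()}

# 68 usable responses out of a sample of 75; the 7 missing cases are excluded
percentages = valid_percentages({"yes": 40, "no": 28})
# 40/68 is about 58.8%, 28/68 about 41.2%
```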

Validity: the extent to which an indicator measures a given concept or one of its dimensions

Value judgment: a statement or opinion based on one's beliefs or values and not on empirical evidence

Variable: any characteristic that varies; that can take on two or more numerical values or has two or more qualities; the various values or qualities of a variable are its attributes

Variance: a measure of dispersion or variability among scores in a distribution; variance is the mean of the squared deviations of each score from the mean of the distribution

W

Web address: see Uniform Resource Locator

Web site: an electronic source of information accessible through the Internet, the worldwide telecommunication network and software that links computers to Web sites

Web survey: a survey conducted by posting a questionnaire on a Web site and inviting viewers to complete the questionnaire; also referred to as an Internet survey

Weighted index: an index in which the indicators making up the index are assigned different values to reflect the greater importance of some of them to the composite score

Weighted sample: in analyzing data from samples, assignment of different weights or values to cases in inverse proportion to their probability of selection

Weighted score: in constructing a composite score, the process of giving greater value to some indicators over others

Weighting indicators: the process of assigning greater importance to certain indicators over others in the construction of a composite measure

World Wide Web: the original name for the connections among Web sites, now preserved in the "www" in many Web addresses

Chapter 2. The Sudan Fertility Survey: An Introduction to Research

Introduction

In Chapter 1 you learned about the scientific approach to conducting research and the typical stages in the research process. In this chapter, we show how the research process was used in conducting the Sudan Fertility Survey, a large scale research project designed to provide information on an important condition affecting the future of the Sudan — its birth rate. As with all research, the Sudan Fertility Survey began with the definition of the problem to be investigated.

Specifying the research question

In most research, the researcher decides what to investigate, as you will have to do in your initial study. Sometimes, however, researchers are asked to investigate a question for some organization, such as a government ministry. This is how the Sudan Fertility Survey occurred. The Department of Statistics of the government of the Sudan wanted accurate, detailed information on the current and the estimated future fertility rate in Sudan. In population research, the fertility rate is defined as the number of live births per 1,000 women of childbearing age.

The resulting investigation became known as the Sudan Fertility Survey (Department of Statistics, 1982). We describe this study for four reasons:

1. To show the value of social research - the survey was requested by the government of the Sudan to provide information for developing family planning programs;

2. To illustrate the application of social research methods to an important social problem - that of high population growth;  

3. To show how research is planned and carried out in practice; and

4. To give you an idea of how the results of research can be used to understand social conditions in a country.

We begin by examining how the study was carried out because the value of the results depends on how information is collected and analyzed, and this depends on how well the study was planned in the first place.

Designing the study

All research projects require a design or plan for the collection and analysis of the data. In preparing a design for the Sudan Fertility Survey, a number of important decisions had to be made, one of which was who to study.

References begin with the name(s) of the author(s). In the case of the Sudan Fertility Survey, the author is a government organization. The full reference to this report and others we cite later are provided in the List of References.

Who to study?

Given the objective of the study, it was obvious that married women would have to be the source of the desired information. In research terms, the women became the respondents in the study. Their responses to questions they were asked became the data of the investigation. Incidentally, data are plural. No one would base a study on the answer of a single respondent to a single question, which would produce a datum or just one bit of information. In contrast, research is based on the collection and analysis of a body of data. The Sudan Fertility Survey, for example, was based on responses by more than 3,000 women to over 200 questions. That's a lot of data.

With the decision made to collect data from married women, the researchers faced a new decision. This was whether to collect data from all eligible women in northern Sudan or to limit data collection to some smaller number of women. All eligible women, those who were ever married and living in northern Sudan, constituted the population being studied. For the Sudan study, the population included over three million women, far too many to try to collect data from: doing so would take too long and cost too much money. Knowing this, the researchers chose the alternative used in most social research. They selected only part of the population as the respondents for the study. This smaller set of women, called a sample, was selected so that the women in the sample were like the population in all important ways, such as being about the same ages, having the same levels of education, and having the same number of children. In Chapter 8 you will learn how samples are selected.

How to collect the data?

Next, the researchers had to decide how to collect the data from the sample of women. The method chosen was to conduct a survey based on personal interviews with each woman in the sample. With this decided, the investigators turned to developing the questions to be asked. Stating the questions to be asked is a critical step in planning a research project because, as in everyday life, the answer you get to any question you ask often depends on how the question was asked. Considerable care, therefore, was taken in framing each question. This task was made easier in the Sudan study because many of the questions had been used in previous studies of fertility in other countries.

Studies frequently require translation of questions into the language of the respondents. This was the case with the Sudan survey. Questions, originally in English, were translated into Arabic, the language of the women who would be interviewed. This translation was checked to make sure that the meaning of each question was not changed as a result of being translated. Checking was done by translating each question back from Arabic into English, and then by comparing the two English forms of each question. When the back translation agrees with the original language, the translation is considered safe to use. If the two forms differ, the process is checked to find the cause of the difference. In this case, English and Arabic versions of the questions were compared. Back translation, however, can be used with any pair of languages.

After the researchers were certain that the translated questions asked what was intended, a small sample of women was interviewed to make sure that the women who would be interviewed in the main study would understand the questions and be able to answer them accurately. Following this step, called a pretest, the questions were organized into a questionnaire. As the name implies, a questionnaire is the final set of questions used to collect data from a sample.

Persons called interviewers were then trained to use the questionnaire to interview each woman included in the sample. In conducting interviews, each respondent was asked each question on the questionnaire and her answers were recorded by the interviewer.

So far we have discussed the typical elements of a social survey. A survey is one form of social research. Generally, surveys are based on data collection from a sample using a questionnaire.

Collecting the data

The collection of data, in this case the process of interviewing the respondents, lasted from December 1978 to April 1979, and resulted in completion of 3,115 questionnaires from eligible women. Reporting the time period for data collection is expected in research reports because reports are frequently published years after data are collected. Therefore, it is important to tell when the data were collected. This is the only way readers can know how old the data are.

Analyzing the data

Current fertility

The purpose of analysis is to organize the data and see what was found. Analysis generally occurs in two phases. First, investigators summarize responses to each question. The central question of the Sudan Fertility Survey was how many children each woman had. Each woman was represented by a number, from zero for those who had not yet given birth to a child, to the maximum number born to any woman. These numbers represent the raw data for establishing the fertility rates in northern Sudan in 1978/79. The raw data were analyzed to find the average number of babies born to the women. Two averages, in fact, were calculated. One average was based on all the women in the sample from whom data were obtained. This average was 4.2 children. It summarized the number of children born to all women, regardless of their ages or how long they had been married.

Another average was calculated to find out how many children had been born to women who presumably would not have any more babies. For this average, only data for women who were 45 to 49 years of age were used. This average described the completed fertility of married women in northern Sudan. As you might expect, the average for completed fertility (6.2 babies) was higher than that for all married women. This result would be expected because the first average included data for younger women, some of whom had only been married for a short time, whereas the average for completed fertility included only women who had many years to produce children.

We cite these two averages to illustrate that a single research project can be used to answer more than one question. Chapters 18 and 19 will give you some ideas of various ways you can analyze the data you will collect.

Estimates of future fertility

As researchers we often want to suggest how we think certain things may change in the future. The following examples show how data from the Sudan Fertility Survey were analyzed to get an idea of possible changes in fertility in northern Sudan.

First, the research team compared the number of babies born to younger women with the number older women had given birth to when they were the same ages as the younger women. The analysis showed that younger women were continuing to have about the same number of babies as their older relatives had at the same ages.

In addition, the researchers examined the number of children the women said they would like to have if they could have the exact number of children they wanted. For all women, the preferred number was an average of 6.4 children, which was higher than the actual completed fertility of the older women (6.2 children). Younger women between the ages of 15 and 24, however, indicated they wanted an average of 5.4 children, less than the 6.4 reported by all women. These findings also point to continued high fertility in northern Sudan.

The researchers also looked at the extent to which family planning was being practiced. The women were asked a number of questions about their knowledge and use of contraceptive methods. Here are some of the results:

Only 12% of the women had used contraceptive methods sometime in their lives;

Of those who had tried some method, only 9% stated they intended to do so again in the future;

And only 16% of the women wanting no more children said they were using a reliable means of contraception.

Seeking an explanation for fertility rates

So far, the results suggest that fertility will remain unchanged in northern Sudan. Women wanted and were still producing large families and few of the couples were using reliable means to limit family size. Still, before stating a conclusion based on these findings, we need to examine fertility in light of other broad social trends in Sudan. Chief among these is the recent increase in years of schooling among girls.

How might increased schooling be linked to fertility? To answer this question, the researchers analyzed the relationship between schooling and fertility. Here are some of the things they discovered:

Women with no schooling had an average of 4.2 children;

Women with 1 to 5 years of schooling had an average of 4.4 children;

Women with 6 or more years of schooling had only 3.0 children on the average.

These findings indicate that completion of primary school was associated with lower fertility.

The education of women was also related to use of contraceptives. As their schooling increased, so did the use of contraceptives:

Only 2.5% of the women with no schooling reported use of contraceptives;

While 15.0% of those with 1 to 5 years of schooling did so;

And an even larger percentage, 42.0%, of women with at least 6 years of schooling indicated use of contraceptive methods.

Among women who wanted no more children, schooling was even more strongly associated with contraceptive use:

Only 8.5% of the women with no schooling and who wanted no more children reported use of contraceptives.

This was true for 29.5% of those with 1 to 5 years of schooling.

A much larger percentage, 63.0%, of the women with 6 or more years of school and who wanted no more children reported use of contraceptives.

Interpreting the results

From these findings, we could draw the conclusion that fertility in northern Sudan will not change much in the immediate future. In the long run, however, as schooling for girls continues to increase, fertility rates will probably decline. These conclusions would represent our interpretation of the findings. In a sentence or two, we say what we think the findings mean. To summarize: results are based on data; results are facts. Statements that give meaning to the facts or results represent the researcher's interpretation of the results.

Generalizing the results

When a proper sample is used, researchers can extend a conclusion by saying what they think is true for the population based on what was learned from a sample. Thus, the results from the sample of 3,115 married women who supplied data for the Sudan Fertility Survey could be extended to describe fertility and conditions affecting fertility among the 3 million married women living in northern Sudan at the time the data were collected. When conclusions are extended in this way they are referred to as empirical generalizations. Empirical is used because the generalizations are based on data. The process of creating a generalization is called generalizing.

Some empirical generalizations that can be drawn from the results of the Sudan Fertility Survey are:

Fertility in northern Sudan is high, averaging slightly over 6 children per married woman.

Fertility in northern Sudan will probably remain high in the coming years.

However, in the long run, fertility in northern Sudan will probably decline as females obtain more schooling.

Notice that these generalizations sound like conclusions. Often generalizations do, but remember, generalizations are offered as the broadest or most general statements one can make, based on the findings of a study. Researchers are careful in drawing either conclusions or generalizations. Sometimes, because of limited data, we have to limit conclusions and corresponding generalizations. The important thing is to be honest in what you say: Be careful not to over-generalize or go beyond what your data indicate. For example, the three generalizations we stated earlier were limited to "northern Sudan." We did not try to generalize to all of Sudan because data were not available for other parts of the country.

Aids

Internet resources

In this chapter, we have presented an analysis of only one social research report. Thousands of additional research reports on all kinds of topics are available on Web sites or through other information services. Since this chapter dealt with a report on fertility, we did an Internet search using Google, a popular search service. Google reported about 203,000 Web sites dealing with "fertility rates." We also looked for reports of studies of fertility rates in POPLINE, an information service that covers population-related topics and issues. On February 9, 2005, POPLINE listed 3,899 items concerned with fertility rates. Some of these were Web sites with the complete text of reports. For example, one report, Transitions in World Population, provides a comprehensive description of population changes, examines bases for future changes, and discusses other issues related to the changing characteristics of the world's population. Others provided summaries of journal articles, books, and other reports related to fertility. Chapter 4 explains how to construct and carry out a search of POPLINE.

Google, POPLINE, and many other information services (see Chapter 4) provide access to thousands of social research reports on all kinds of topics.

Key terms

Analysis
Back translation
Data
Design
Empirical generalization
Generalizing
Interpretation
Over-generalizing
Population
Pretest
Questionnaire
Raw data
Respondents
Sample
Survey

Main points

1. Information collected in an investigation is referred to as data. Data are plural; the singular of data is datum.

2. Data are analyzed to produce the results of a study.

3. Social scientists use data to establish relationships between variables. Clearly established relationships between variables provide the basis for explaining why behavior occurs as it does.

4. Findings or results are interpreted to produce the conclusions of an investigation; to interpret findings is to say what we think they mean.

5. Conclusions are statements based on findings.

6. An empirical generalization extends findings from a sample to a population.

Chapter 4: Selecting a Question to Investigate

Introduction

Selecting a question to investigate may be the hardest part of your initial research project. It is also a very important decision. Every other decision you will make in planning a research project will be based on what you decide to study. In this chapter, we offer some suggestions for working through this important process. Part of this process involves learning about what is already known about the topic you choose to investigate. In research, becoming informed about previous research findings is referred to as conducting a review of the literature. Therefore, we combine the process of selecting a question to investigate with the process of learning about previous research. Today, with the increasing importance of the Internet as a source of information, literature reviews include Internet searches in addition to looking for information in libraries.

The usual processes involved in selecting a question to investigate are outlined in Box 4.1.   

Box 4.1. Processes in deciding on a research question

1. Getting an initial idea: may be expressed as a "topic," "interest" or "problem"

2. Evaluating the idea, topic, interest, or problem

3. Conducting a comprehensive review of the topic or problem

4. Making a final decision on the research question

Your initial research question

Getting an initial idea

Research starts with getting an initial idea about something to investigate. We use the word "idea" at this stage of the process to cover different ways you might start. You may begin with a "topic" or a "problem" that interests you.

Topics, problems, or questions for research can come to you at any time and from a variety of sources. Course work is an obvious and frequent source of research questions. You may be stimulated by something an instructor says or by something you have read. Some research reports end with a section entitled "Recommendations for future research." One of these recommendations may excite you and lead to a problem you want to investigate.

Frequently, things mentioned as "problems" by friends or relatives or something you read about in a newspaper or magazine can be rephrased as a question for study. In addition, your own personal experience or interests may lead you to do research on a certain problem. A student from a religious or ethnic minority group may be motivated to investigate attitudes or behavior of the majority group toward the student's group. A student from a rural area may want to investigate ways of improving social services in his or her village.

You may find that recording ideas in a notebook as they occur to you is helpful. You can review these ideas periodically, cross out ones that no longer appeal to you, and keep others for consideration when you have to submit a topic or problem for your research requirement.

Evaluating the question

No matter how excited you may be about an initial research question, you need to analyze the question carefully before committing yourself to it. Regrettably, sometimes one's initial question is not a good basis for research. A common flaw is that the problem or question is defined too broadly. To illustrate, suppose a student became concerned about crimes involving women and decided to investigate the topic of "women and crime." We call this initial idea a "topic" because it simply describes a broad area for possible research. In this form, it is not a problem or a question, but it can serve as a starting point for specifying a question for investigation.

As the student got into this topic, he or she would soon discover that this topic covers a number of more specific topics. These include:

What kinds of crimes are committed by women (theft, beer-making, prostitution, etc.)?

Which kinds of crimes are committed against women (beating, rape, etc.)?

Page 40: Glassory of Research

What variables are associated with criminal behavior of women (age, economic level, ethnicity, etc.)?

What kinds of punishment are given for each kind of crime (fines, imprisonment, etc.)?

Do punishments differ for women from different ethnic or social backgrounds?

What effect does punishment of women criminals have on the women? Their children? Their families?

Any of these questions could become the basis of a research project. Like most beginning researchers, you will probably go through this kind of process, starting with a broad topic and then working toward a specific question that you can investigate. Some ways of doing this are listed in Box 4.2.

Box 4.2. Ways of converting topics into researchable questions

Discuss your ideas with fellow students.
Ask your instructor or research advisor to review your ideas.
Seek comments from experts; using the "women and crime" example, you might ask police or judicial officials for their opinions.
Do a preliminary review of the literature on your topic.
Try to state the topic or question in a clear, single sentence; if you can, you probably have a good idea of what you want to investigate.

Practical considerations

In selecting a question to investigate, two other things to consider are:

Whether you can do the work required; and
How excited you are about the project.

In addition to defining their questions too broadly, many first-time researchers underestimate the work required to complete a research project. You can avoid this kind of mistake by realistically estimating the amount of time and effort that will be required to do your research. If you were conducting a survey, some questions to ask include:

How many hours will I have to spend collecting my data?
How many respondents will I have to interview to have a sufficient number of interviews?
How long will it take to locate the respondents?
How much travel time will be involved?
How long will each interview last?
Also, how much will data collection cost?
Will there be costs for transportation to get to respondents?
Will there be other costs, such as having to pay for meals while collecting data?
Can I pay for these costs?


How long will it take to organize and analyze my data?
Can I complete the entire project in the time I have?

If you decide that the project will take too long or cost too much, you will have to reduce its scope or find a question you can investigate within your time and financial resources.
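To make these questions concrete, the arithmetic behind such an estimate can be sketched in a few lines. The figures below are purely hypothetical planning assumptions, not values from this chapter; substitute your own estimates.

```python
# Rough time-and-cost budget for a hypothetical survey project.
# Every number here is a made-up planning assumption.

respondents = 60            # planned number of interviews
minutes_per_interview = 45  # assumed length of one interview
travel_minutes_each = 30    # assumed average travel time per respondent
cost_per_trip = 2.0         # assumed transport cost per respondent

interview_hours = respondents * minutes_per_interview / 60
travel_hours = respondents * travel_minutes_each / 60
total_hours = interview_hours + travel_hours
total_cost = respondents * cost_per_trip

print(f"Interviewing:     {interview_hours:.0f} hours")
print(f"Travel:           {travel_hours:.0f} hours")
print(f"Total field time: {total_hours:.0f} hours")
print(f"Transport cost:   {total_cost:.2f}")
```

Even this crude sketch shows why scope matters: doubling the number of respondents doubles both the field time and the transport cost.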

You should also ask yourself how excited you are about doing the project. Why am I interested in this question? How excited am I about it? Research is hard work. Even a small-scale project can take many hours, weeks, and possibly months to complete. Being excited about your project and wanting to find answers to the problem you have selected can provide tremendous motivation to see the project through to a successful conclusion. Your excitement, however, should not lead you to favor one set of results over others. Remember the norm of being disinterested in results. As a person, you can be interested in a certain problem and want to do research on it, but, as a researcher, you will want to do the research as honestly as you can and be ready to accept whatever results occur.

As you think about what you want to study, you will probably reject some ideas and begin to focus on a topic or question that interests you more than others. At this point, you will want to learn all you can about your area of interest. Researchers do this by conducting a review of the literature.

Reviewing the literature

Three planning activities will produce better results and save you a lot of time in conducting your literature review. These are:

Defining the scope of the literature review;
Identifying relevant information sources; and
Preparing to record information you will find.

Finding information for a research paper is a research undertaking in itself. Several Web sites show ways to get started. Take a look at Research Helper or Steps in Research and Writing Process. Our site and the two just mentioned describe the typical steps involved. How you use these steps will depend on your topic and the kinds of information sources available to you. Be resourceful: adapt the steps described to your needs.

Defining the scope of the review

This step is necessary because a given concept or variable may be expressed in different ways in various reports you will read. A common variable, such as gender, for example, may be referred to as gender, as the sex of the person, or in terms of male-female or masculine-feminine relations. If you were doing research on gender roles, you would need to look for publications that contain any of these words in their titles. To do this, you would need to make a list of all possible words, terms, or phrases that might be used to refer to gender and the roles of males and females. Your list of terms will define the scope of the relevant literature.

Frequently, new variables are discovered during the review process. These variables, and all the related terms you can think of, should be added to the list of terms used in defining the scope of publications you will want to review. At some point, you have to decide how far you want to go in looking for similar or related terms. A practical guide is to stop looking when you no longer find any new terms.

Identifying information sources

For students in many developing countries, finding research information is difficult. Most libraries have limited collections of books and journals. Still, some information on a wide range of topics can be found in most countries. Four sources you will want to explore are:

Your university library
Nearby libraries
Offices of faculty
Internet-based sources of information

Using your university library. Your university library has two important resources: (1) materials, including books, journals, encyclopedias, and other materials; and (2) reference specialists who can be of immense help in finding information you need.

On your own, you can check to see whether your library has books related to your interests. This is a good way to start any review of literature. Today, most catalogs of university library holdings can be viewed online from a computer terminal. If your university library has an online catalog, searching for titles of books related to your research question is relatively easy - certainly easier and faster than using a card catalog. You can enter the words or phrases from your scope of interest and see if your library has books related to these topics. If your library is not automated, check the card catalog to identify books based on the main terms used in the catalog.

Getting help from library reference staff. The reference staff of your university library may be your best information resource. These helpful information specialists can lead you to useful information you would otherwise not know about. In the social sciences, for example, most research is published in professional journals. Librarians will know whether your university library has journals that may contain information related to your problem. If not, they may be able to tell you whether a nearby library has the kind of information you are looking for. Also, ask if your library has encyclopedias that may contain articles on topics related to your interest. Some of these may provide useful summaries of literature in your area of interest.

Reference librarians can also suggest other sources of information, such as books, government reports, and other research reports on file at the library. Also, ask your librarian about journals or books that contain only reviews of literature. Many reviews on various topics are published each year. These will be the single most valuable source of information you can find, provided the article is related to your problem. A good review article lets you start with much of what is already known about some topic and provides references to publications   you will want to read.

Using nearby libraries. Other libraries near your university may be worth exploring. The library of another university may have materials your library does not have. Many government ministries and research centers maintain libraries that contain not only all the public documents of the organization, but also reports of international organizations and those from similar foreign organizations. For some research topics, these specialized libraries may be the best sources of information in your country.


Offices of faculty. Faculty members often have their own collection of books in their area of interest. Instructors you have had for various social science courses may have books that contain reports of research related to your interest. It doesn't hurt to ask, and, if you do, you may find information you can use.

Preparing to record information

Prior to starting your literature review, decide how you will record notes. Some students scribble notes on scraps of paper with no consistent record of what they have read. Later, they discover they cannot figure out what they had scribbled down. To avoid this problem, we suggest using a single sheet of paper to record information you take from a publication. Using the same size of paper for all notes will also help you organize your notes. In addition, we suggest using a consistent set of categories for recording notes. This will help ensure that you get all the information you need at one time and save you from having to find a publication a second or third time to get information you need for writing your research report.

The most important information you will want to record on your note cards are the findings or conclusions of the reports you read. You will also need to copy information for preparing a reference for each publication you will cite in your final report. A reference includes the names of the authors, the date and title of the publication, and other information we describe later in this chapter. Other information you may need in writing your report includes the statement of the problem or research question in the publications you select, a brief description of the design used (survey, experiment, observational study), the kind and size of the sample used, and other points that may interest you.

Box 4.3, below, illustrates use of a note card. The categories for recording information are shown in bold. These categories are to remind you that information on these points is important and to help you get consistent information for all publications you read. To illustrate use of a note card, we have entered notes we took for a publication we used in reviewing literature for writing this book.

Box 4.3. Example of a note card

Reference: Rahama, A. (1997). Gender roles in crises situation: the case of the famine of 1984/85. The Ahfad Journal: Women and Change, 14:2, 4-15.

Problem/question: What coping strategies were used by individual households in northern Kordafan for surviving the famine of 1984/85?

Design: A multimethod approach was used, including a survey supplemented by information obtained from participant observation and from the analysis of available records and secondary data.

Sample: The survey was based on a sample of 120 women selected from three ecologically different areas of northern Kordafan; women living in their villages, women who had migrated to shanty towns, and those who had migrated to relief camps.

Findings/conclusions: The overall conclusion was that women's roles were crucial to the survival of their families; also, roles changed greatly for both men and women. In the initial stages of the crisis, women used their knowledge of local plant species to supplement the food supply for their families; women also prepared only two meals a day, used cereals to supplement other food, and rationed what food was available; men migrated with their animals and began selling animals to get money for food. As the crisis continued, selling assets by both men (animals and handicrafts) and women (gold and handicrafts) became more common; migration continued. In the final stages of the crisis, migration was combined with entry of females into the labour market, resulting in changed roles for men and women; women engaged for the first time in paid labor while men stayed home and cared for the children. A final effect of the crisis was that 63% of the women became household heads.
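If you take notes on a computer rather than on paper, the Box 4.3 headings can double as fields in a simple structured record. The sketch below is our own illustration in Python, not something prescribed by the text; the field names mirror the box's headings, and the sample values are abbreviated from the Rahama example.

```python
from dataclasses import dataclass, field

@dataclass
class NoteCard:
    """One note card per publication, using the Box 4.3 headings."""
    reference: str             # full reference for the report
    problem_question: str      # the problem or question investigated
    design: str                # survey, experiment, observational study, ...
    sample: str                # kind and size of the sample used
    findings_conclusions: str  # the most important part of the card
    extra_notes: list = field(default_factory=list)  # any other points

# Abbreviated entry for the publication shown in Box 4.3.
card = NoteCard(
    reference="Rahama, A. (1997). Gender roles in crises situation...",
    problem_question="Household coping strategies in the 1984/85 famine",
    design="Multimethod: survey plus participant observation and records",
    sample="120 women from three areas of northern Kordafan",
    findings_conclusions="Women's roles were crucial to family survival",
)
print(card.reference)
```

Keeping every card in the same shape makes it easy to spot, before you leave the library, which fields you forgot to fill in.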

Finding information and taking notes

In the process of identifying potentially useful sources of information, you will discover articles, material in books and research reports, and possibly information on Web sites. As you start the actual review process, your first task is to decide which sources to examine in detail. We offer the following guidelines for getting the most useful information in the least amount of time:

Focus on material directly related to your research question
Learn to read critically: look for materials that tell you something new about the topic you are investigating
Take careful, comprehensive notes
Prepare a complete, accurate reference for each source used

Focusing on relevant material

Some students think a good literature review is based on reading everything they get their hands on. This can lead to a great waste of time and prevent you from finding and learning from really useful material. Before starting, therefore, it is a good idea to set some limits on what you are going to try to get. The scope you defined for your literature search provides a useful set of limits. Look first for material that relates to the central concepts or terms in your scope of interest.

Second, decide how far back in time you intend to search. In some fields, research moves rather quickly, making research done five years or so earlier relatively useless. Start with the most current publications you can find and work back until you don't find anything new or particularly valuable. Then stop.

When you find a publication that looks promising, first read the abstract, if the publication has one. An abstract is a short summary usually found at the beginning of an article. If an article does not have an abstract, turn to the summary at the end of the article and read it. Then read the introduction or the review of literature section to see how the author says the report fits into the broader literature. After all, you might as well benefit from the reviews conducted by previous writers. If these sections indicate the report has information related to your interest, then read the whole report critically and take notes on the points most valuable to you.

Learning to read critically

The key to analyzing publications and to preparing a review of literature is to read critically. Publications vary greatly in quality. Many publications meet the highest standards of research and scholarship; unfortunately, others do not. With relatively little research experience, you may find it hard to distinguish between good-quality reports and ones of lesser quality. By the time you finish this book and your course on research methods, you will be better able to judge the quality and value of reports. In general, if you suspect that the methods of data collection and analysis are weak, you can probably disregard a report.

As you read, be alert to references to other publications. For new references, first examine the title and decide whether you think it might contain information useful for your study. If you think it does, copy down the reference on a note card for use in finding the publication. Continue this process until you run out of new leads. Also, try to learn as much as you can from each publication you read. The authors of each report you read dealt with the same things you will face in planning and carrying out your study. As you read, watch for points listed in Box 4.4. Learning how authors handled these points can help you at every stage in your own research, from stating your research question to writing your report.

Box 4.4 Points to look for in reading a research report - how authors:

Defined and presented the problem they were investigating

Organized their reports (what was the order of the various parts)

Described the design they used
Described their sample design and actual sample
Presented and analyzed data
Reported and interpreted their findings
Derived and presented their conclusions
Presented references to publications they cited

Taking notes

Using the headings (in bold type) in Box 4.3 as a guide, record your notes. How extensive your note-taking is will depend on how valuable you think the information from any publication is to you. Once you have a publication in hand, recording comprehensive notes for later use is a lot better than going back to the library or other sources a second or third time to get missing data. Sometimes, in the haste of getting through publications, students fail to write clearly. At the time, this may seem like a timesaving action, but sloppy note-taking often leads to later frustration and a waste of time. Frequently, notes originally taken to help define the research question are used again when the report is being written, and this can be months after the notes were taken. By then, sloppy notes may be difficult to read, causing considerable frustration and even delay in completing a report on time.

As part of note-taking, you will need to create a reference for each publication you intend to cite in your report.

Preparing references


The reference to a publication gives readers of your report the information needed to find a publication you cited. Before recording references, you will want to check with your instructor to see if your university has a preferred way of preparing references. If it does, you will want to follow the format required by your university. If your university does not have a preferred format, you may want to use the formats shown in Box 4.5 or another of the widely used styles for references.

A reference is composed of elements. As you see in Box 4.5, the last names of authors are listed first for all kinds of references, followed by the initials of the authors, each of which is followed by a period. For materials having a second or additional author, the last names of the additional authors are also listed first, followed by their initials. The abbreviation for "and," shown as "&," is used between the next-to-last and the last author. Next comes the year of publication, set off in parentheses and followed by a period. Sometimes you may have to refer to more than one publication by the same author in the same year. When this happens, add "a," "b," "c," etc., to the year, as (1999a), (1999b), or (1999c), to differentiate among publications by the same author or authors in the same year. From this point on, the elements of references differ, depending on whether you are citing a journal article, a book, or something else.

In Box 4.5, we provide examples of the most commonly used reference styles. For publications with features we don't show, the following site may help:

Formatting

Box 4.5. Examples of commonly used references

Journal article
Ghorayshi, P. (1996). Women in developing countries: methodological and theoretical considerations. Women and Politics, 16, 89-109.

Grotberg, E. H., & Badri, G. (1986). The effects of early stimulation by Sudanese mothers: an experiment. The Ahfad Journal: Women and Change, 3:2, 3-16.

Book
Lobban, C. F. (1987). Islamic law and society in the Sudan. London: Frank Cass.

Chapter in a book
Styos, J. M. (1983). Sample surveys for social science in undeveloped areas. In M. Martin & D. P. Warwick (Eds.), Social research in developing countries: surveys and censuses in the third world. New York: Wiley.

Dissertation
Darkoh, C. A. A. (1994). Women's roles and social change in Sudan. Unpublished doctoral dissertation, Iowa State University, Ames.

Unpublished report
Department of Statistics, Ministry of Planning and the National Economy, Republic of Sudan. (1982). The Sudan fertility survey. Voorburg, The Netherlands: International Statistics Institute.

Online document
Darden, L. (2003). The nature of scientific inquiry. Retrieved February 4, 2003, University of Maryland.

For a journal article, the title of the article comes after the year of publication. Only the first letter of the first word is capitalized; lower case is used for the rest of the title. The exception is that proper names (of persons, organizations, countries, etc.) are capitalized. A period is placed at the end of the title of the article. The title of the journal that contained the article is listed next. The first letters of the main words in the journal title are capitalized, and the journal title is placed in italics, followed by a period. The last element in a journal reference contains the volume number, the issue number (when used by a journal), and the page numbers of the article being cited. When the issue number is used, the volume number appears first, followed by a colon and the issue number, then a comma, and then the page numbers, beginning with the first page of the article, followed by a dash and the last page, and ending with a period. Information for creating a journal reference can usually be found on the cover of the journal, on the first page of the article, or in the table of contents of the journal containing the article you are citing.
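The element order just described can be made concrete with a short sketch. The function below is our own illustration, not an official citation tool; the `*...*` markers stand in for italics, which plain text cannot show.

```python
def journal_reference(authors, year, title, journal, volume,
                      pages, issue=None):
    """Assemble a journal reference in the element order the text
    describes: authors, (year), article title, journal title in
    italics (shown as *...*), volume[:issue], pages.
    A simplified sketch, not a full citation formatter."""
    if len(authors) > 1:
        # "&" goes between the next-to-last and the last author.
        author_part = ", ".join(authors[:-1]) + ", & " + authors[-1]
    else:
        author_part = authors[0]
    # Volume first, then a colon and the issue number when one is used.
    vol_part = f"{volume}:{issue}" if issue else f"{volume}"
    return (f"{author_part} ({year}). {title}. "
            f"*{journal}*, {vol_part}, {pages}.")

# Reproduces the first entry in Box 4.5 (with *...* for italics).
print(journal_reference(
    ["Ghorayshi, P."], 1996,
    "Women in developing countries: methodological and theoretical considerations",
    "Women and Politics", 16, "89-109"))
```

Passing `issue=2` would produce the `3:2`-style volume element shown in the Grotberg & Badri entry.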

The second reference in Box 4.5 shows how to prepare a reference with two authors. Two or more authors of a book or any other publication are listed in the same way as shown for the two authors of a journal article.

Reference to a book

The third reference in Box 4.5 shows how to prepare a reference to a book. As with a journal reference, the names of the authors are listed first; then comes the year the book was published, set off in parentheses and followed by a period. The next element is the title of the book. The title is placed in italics, with only the first letter of the first word capitalized. A period is placed at the end of the title. When punctuation, such as a comma or question mark, is used in a title, include it in exactly the same way in your reference. The next element is the name of the city where the book was published, followed by a colon and then the name of the publisher of the book. The end of a reference to a book is marked by a period. The date, the city of publication, and the name of the publisher are usually listed on one of the first pages of a book.

Reference to a chapter in a book

The fourth reference in Box 4.5 describes a chapter authored by J. M. Styos that appeared in a book edited by M. Martin and D. P. Warwick. The only new elements in this reference are the word "In," which shows that the chapter is "in" a publication put together by other persons, and the names of the editors, listed in their natural order, as M. Martin and D. P. Warwick. Their role as editors is shown by the abbreviation "Eds." in parentheses. The rest of the reference for a chapter is the same as for any book.

Reference to a doctoral dissertation

As the fifth example, we show a reference to a dissertation submitted by Cecilia A. Adae Darkoh to Iowa State University, where she was enrolled. For doctoral dissertations or master's theses, the title is in normal type (not in italics). The words "Unpublished doctoral dissertation" are added, followed by the name of the university and the city in which it is located. A reference to a master's thesis is the same except that "Unpublished master's thesis" would be used at the end.

Reference to a research report

Many government ministries and other organizations publish research reports. The report of the Sudan Fertility Survey, shown as the next-to-last item in Box 4.5, is used to illustrate a reference to a research report. As it happened, a team of researchers and writers prepared this report for the Department of Statistics. The Department, therefore, is listed as the author. Librarians refer to authorship by the organization that publishes a report as the corporate author. When you find that an individual is not listed as the author of a publication, list the organization that published the report as the author. The title is in normal type.

The last reference is to a document found on a Web site. The format is the same as for an unpublished report, except that the date when the information was retrieved is given, followed by the name of the site and its address. No punctuation is used at the end of the reference. Formats for references to other electronic sources, such as electronic journals or newsletters, are described on the site of the American Psychological Association.

Refining your list of references

Your set of note cards will change from day to day as you drop some sources and add others. Some references will lead to additional, useful information; others will not. As you work through your original and new references, you will also be coming closer to the end of the literature review phase of your project. Before stopping, consider browsing in the book collection of your university library.

Browsing is easy because books on the same subject are placed together on library shelves. When you find a reference to a book clearly related to your research question, go to the book's location (given by its call number) and examine the titles of nearby books. You might find additional books that are useful as well. To save time, start by examining the indexes at the back of the book. Look for topics that relate to your interest.

With a slight variation, browsing can also be applied to journals. When you find that several valuable articles came from the same journal, examine other issues of that journal. You may find additional valuable articles. With journals, scan the contents pages to identify titles of articles that grab your attention, and then read each article you select critically.

Creating a list of references

As you finish your review of literature, you will have a pile of note cards with references to publications and possibly Web sites from which you took information. As part of your report, you will need to convert the references you created on your note cards to a list of references for your report. To create this list, place the references in alphabetical order according to the last names of the authors and create a list with the full reference to each publication or source of information you mention in your report. For an example of how this is done, click on References and look at the references we cite.
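The alphabetizing step can be sketched in a few lines. This is a minimal illustration of our own, assuming each reference is stored as a plain string that begins with the first author's last name.

```python
# Alphabetize a reference list by the first author's last name,
# the rule the text gives for the final list of references.
references = [
    "Lobban, C. F. (1987). Islamic law and society in the Sudan. ...",
    "Darkoh, C. A. A. (1994). Women's roles and social change in Sudan. ...",
    "Ghorayshi, P. (1996). Women in developing countries. ...",
]

# The last name is everything before the first comma in each entry.
references.sort(key=lambda ref: ref.split(",")[0].lower())

for ref in references:
    print(ref)
# Darkoh first, then Ghorayshi, then Lobban
```

Sorting on the text before the first comma works only if every entry starts with "Lastname, Initials"; corporate authors with commas in their names would need special handling.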

Searching Internet sources


We have added this section on searching the Internet because of its increasing importance to researchers.

Understanding the Internet and Web sites

The terms Internet and Web sites are often used to describe the same thing, but, technically, they refer to different things. The Internet is the set of connections and common means for linking users to Web sites throughout the world. The Web sites contain the information one can access through the communications and linkages provided by the partners who make up the Internet. Today, Web sites are maintained by most major organizations, including universities, research centers, government ministries, professional organizations, international organizations, and businesses. The ever-increasing number of Web sites offers information on almost any topic you can think of. Because of the growing importance of Web sites as a source of research information, researchers routinely check Web sites for information. If you have access to the Internet, we recommend you include a Web search as part of your literature review.

Web site addresses

Each Web site has a unique address referred to as its Uniform Resource Locator, or URL. In the beginning, the addresses of all Web sites began with "www," which stands for the World Wide Web. Now, many Web sites do not include the "www" prefix; instead, they begin with some short version or the initials of the name of the organization sponsoring the site. For example, the Web address for the World Bank is www.worldbank.org, while that for the Ahfad University for Women, in Omdurman, Sudan, is http://ahfad.org. You will see Web addresses in both forms, with various sets of letters and numbers. Also, you may see a Web address set off with a pair of arrowheads, < and >. These are not part of the address; they are used to set an address off from any text around it. Web addresses may also end in a variety of ways. Some may include the letters "html," but others may have only "htm" or some other ending. Be sure to note the full and exact address of any site you see and wish to return to; otherwise, you will have difficulty returning to it.
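If you keep your list of addresses electronically, Python's standard `urllib.parse` module can help you record them in full and check their parts. This aside is our own illustration, not part of the original text; `urlparse` needs a scheme such as `http://`, so one is added where the text omits it.

```python
from urllib.parse import urlparse

# The two addresses mentioned in the text: one with the "www"
# prefix and one without.
addresses = ["http://www.worldbank.org", "http://ahfad.org"]

for address in addresses:
    parts = urlparse(address)
    # netloc is the host part; path and the rest are empty here.
    print(parts.netloc)
```

Recording the full parsed address (host plus any path and ending such as `.html` or `.htm`) is what lets you return to the exact page later.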

Kinds of information on Web sites

Many kinds of information are available on Web sites. We describe five kinds of information sources that are particularly useful to researchers. These are:

Social science databases
Sites of international organizations
Other social science Web sites
Guides to specialized Web sites
Full text sites

Social science databases

A database is a computer-based index to journal articles, books, and research reports in some field of knowledge. Some leading databases covering the social sciences are described in Box 4.6. Each provides quick online access to information about thousands of articles and other research reports. Each also:


Can be (has to be) searched by computer;
Provides the full reference of the cited publication; and
Supplies a short summary or abstract of the publication.

Because of the number of references they contain, databases have to be searched by computer. This lets you quickly find material matching your specific interest from the thousands of other reports in the database. Many databases are accessible through Web sites. Some are also available on CD ROM (Compact Disk Read Only Memory). You might want to check to see if your library subscribes to any database in your area of interest on CD ROM.

Searching a database through a Web site requires a connection from your location to the Internet, and, for most databases, a subscription is required as well. You may wish to see if your library subscribes to any of the databases we list or others in your field of interest. Two widely used databases, POPLINE and ERIC, however, can be searched at no charge.

Illustrative database search

We will illustrate database searching using POPLINE, because it is free and covers topics of interest to persons in developing countries. If you want to search the ERIC database, detailed help is available at ERIC Slide Show.

Information from most databases can be downloaded to the disk on your computer or a floppy disk, or be printed out. Each also provides a short abstract of each document in the respective database. Abstracts describe the problem that was investigated, the methods used, findings, and conclusions. Often, an abstract will provide all the information you will need. In addition, POPLINE and ERIC can provide copies of the full text of most publications in case users want more details than abstracts provide. POPLINE Digital Service provides free copies of documents for users in developing countries. For more information, go to POPLINE and click on "Document Delivery Policy" in the list of topics on the left side of the page. A number of the documents cited in ERIC can also be downloaded free.

Chapter 5: Creating a Research Design

Design and the purpose of the research

Your research design is your plan for providing a sound, and, if possible, conclusive answer to your research question. Designs vary greatly, depending on the research question being addressed and the methods of data collection the investigator chooses to use. Creating a design starts with the purpose of the research.

Most research is done for one of three purposes. The most common purpose is to describe some set of variables or relationships among them as accurately as possible. Other purposes of research are to explore a topic to learn more about it or to explain why certain social patterns or relationships occur as they do. Some studies combine more than one purpose. For example, while the Sudan Fertility Study was designed primarily as a descriptive investigation, we showed how data from this study were analyzed to explain why future fertility rates may decline in northern Sudan. Nevertheless, we will describe each approach separately to show how each purpose affects design decisions. Exploratory research generally requires less rigorous design decisions, so we will start with it.

Exploratory research

Exploratory research, as the name suggests, is a way of gaining some initial information about a problem or topic. You may be interested in a certain problem, but don't have enough information to write a clear research question. Perhaps you are not sure what the critical variables are or what methods of data collection may work best. Before beginning an investigation, you might decide to explore the problem by doing some informal interviews or by living for a short time with the group you want to study. These techniques are often used in exploratory research.

Example. Julia and Ridha (2001) explored experiences of Kuwaiti women during the Iraqi invasion and subsequent occupation of Kuwait in 1990. Twenty women who played various roles, including active opposition to the Iraqi invaders, told of their experiences before, during, and after the invasion. The women reported that they actively sought their own self-development and liberation, but once the conflict ended, their choices and rights tended to become restricted again by traditional male prerogatives.

Exploratory studies, however, seldom provide satisfactory answers to research questions. One reason is that most exploratory studies are based on samples too small to permit generalizing the results to a larger population. This is certainly true of the study of Kuwaiti women. But exploratory studies can provide valuable, even critical, information for designing larger scale descriptive or explanatory studies.

Descriptive research

Descriptive research is more specific and focused than exploratory research. The researcher starts with a well-defined problem or research question and a clearly defined plan for collecting and analyzing data. Descriptive research is intended to produce clear, well-founded answers to a specific question, or specific factual information. Surveys are frequently used in descriptive research.

Example. Fattah (1981) conducted interviews, using a questionnaire, with 200 randomly selected farm families in Iraq. The questionnaire contained over 200 items about husband-wife decision-making related to the operation of the farm, financial matters, social life, entertainment, training of children, and childcare. She also obtained information on whether the families had a "modern" or "traditional" view of things. These data provided a detailed description of the degree to which wives participated in decision-making with their husbands. Fattah concluded that factors associated with modernism will continue to increase and will enhance the position of women in rural Iraq, which, today, remains male dominated.

Explanatory research

Explanatory research goes beyond exploratory or descriptive research by trying to find the reasons why certain relationships occur. It seeks to provide explanations for what has been observed. Explanations are based on interpretation of findings in terms of broader concepts and accepted theory.

Example. Using extensive interviews and observation, Dei (1992) developed an explanation for how Ghanaian villagers survived prolonged drought and related hardships. He found they got through hard times by shifting from farm production for a market economy to shared production with other villagers for household consumption. A strong sense of community solidarity developed and allowed the villages to survive with what they could produce together. The explanation for their survival was the successful return to earlier farming methods.

Regardless of the purpose of your research, you will also need to decide on a technique for data collection. There are two contrasting yet complementary techniques - quantitative and qualitative means of collecting data.

Quantitative and qualitative data

Many fields of science, particularly the physical sciences, report observations in terms of numbers. Observations reported in numbers are referred to as quantitative data. Social scientists obtain and analyze quantitative data, as you have seen with various studies we have cited. Social scientists also obtain and analyze data in qualitative form. These are observations recorded in words - descriptions of what persons said or did, how they interacted with one another, or what a researcher observed by watching their behavior. Each technique is useful; each also has certain strengths and weaknesses; each also presents a different set of design decisions.

Some researchers draw a sharp distinction between quantitative and qualitative research. A different view is presented in Types of Data. This discussion shows how quantitative data is based in part on qualitative judgments and that qualitative data can also be described and analyzed numerically.

Quantitative research

A quantitative approach requires a well-developed research design. Quantitative research is usually based on:

Careful and precise specification of the question to be answered;
Identification, definition, and measurement of the key variables;
Selection and specification of one or more methods of collecting data;
Development of a sampling plan; and
Numerical analysis of the data, including use of appropriate statistical tests.

Numbers used in quantitative studies may be as simple as counting the number of "Yes" versus "No" responses to some question. More complex forms of measurement, however, are generally used. The descriptive research by Fattah (1981), for example, used a quantitative design. The many decision-making variables were defined and measures were developed for each. These measures, in the form of questions, were then combined in a questionnaire that was used in interviewing a random sample of farm families.

Quantitative data has important strengths. Using numerical measures provides more precise descriptions of variables. In addition, quantitative research permits use of larger samples. As you will see in Chapter 19, "Performing Inferential Statistical Analyses," larger samples provide a stronger basis for generalizations. Also, quantitative data can be combined and analyzed using various statistical techniques. You are certainly familiar with averages and percentages as ways of summarizing data. Later, in Part 4, you will learn about additional ways of analyzing quantitative data.

Most of this site focuses on planning and conducting quantitative research and analyzing data based on quantitative variables.

Qualitative research

Qualitative research, in contrast to quantitative research, seeks to dig deeper into the reasons for behavior we observe. Generally, researchers doing qualitative research start with a broad idea of what they are looking for. In qualitative studies, researchers often combine informal interviews, in the form of long conversations with people, with systematic observation of their daily activities. Data are derived continuously from what the persons in the group being studied do and say, how they say they feel about things, and the reasons they give the researcher for what they do. The researcher accumulates a large amount of data, generally in the form of long sets of notes and written descriptions of what was observed. As data are recorded, the researcher tries to interpret what the data mean and to develop an explanation for what has been observed. In this sense, analysis of data proceeds along with data collection. Qualitative research is generally a much more flexible process, allowing researchers to take advantage of new lines of inquiry as they develop.

The research studies cited earlier by Julia and Ridha (2001) and by Dei (1992) were based on qualitative research methods. Research by ElSayed and Ahmed (2001) on the "Socio-Cultural Aspect of Kala-azar among the Masalit and Hansa Tribes" further illustrates use of qualitative techniques. (Kala-azar, also known as visceral leishmaniasis, is a severe parasitic disease that is frequently fatal if untreated). The authors used small group discussions with villagers, personal interviews with patients, and direct observation of health and sanitation practices in the villages. They found that lack of knowledge about the disease is the main reason for its spread.

We have just touched on qualitative research in this section. In Chapter 13, we describe some of the more frequently used qualitative research methods.  

Combining quantitative and qualitative techniques

Researchers frequently use both quantitative and qualitative techniques in the same investigation. In this way, researchers benefit from the strengths of each approach and minimize their respective limitations. Altareb (1997), for example, investigated attitudes toward Muslims held by undergraduate students at an American university. He began by conducting focus groups, a qualitative technique, to get an initial understanding of students' attitudes. After this exploratory phase, he developed a quantitative measure of attitudes for the main part of the investigation. Davidson (1992) also combined qualitative and quantitative methods in his investigation of ways families in the Nuba Mountains of the Sudan were adapting to rapidly changing socio-economic conditions in their area. He began by interviewing village elders and then conducted a survey to identify the characteristics of each of the 12 villages in his study area. Following the survey, he carried out in-depth interviews with selected households. This combination of methods provided a rich mix of quantitative data for describing adaptive techniques used by villagers and detailed qualitative data for understanding how they were adapting to changes.

Unit of analysis

In designing research you also have to decide on what basis you will analyze the data you will collect. This requires a decision about the unit of analysis for the study. Units of analysis may consist of individuals, groups, organizations, geographical places, or other entities. Generally the way you state your research question indicates which unit will be used as the basis of analyzing the data.

It is easy to confuse the unit of analysis with the entities you collect data from. Frequently, they are the same. In the Sudan Fertility Survey, for example, the data were collected from women and the data were analyzed in terms of the same women. In this case, the decision was clear. In another study, a researcher might interview factory managers to get information to compare the efficiency of different sized factories. In this case, the unit of analysis is factories, not managers, because the purpose of the research is to compare factories; the managers simply happened to be the source of data. If you have any doubt, think about the question you are trying to answer. How do the data have to be organized to answer the question?

Individuals as the unit of analysis

When the purpose of research is to say something about individuals, they become the unit of analysis. Individuals are the most frequent unit of analysis. They may be any age, either gender, or become the units of analysis because they have a common characteristic. Primary, intermediate and secondary students were the units of analysis in a trend study by Badri and Burchinal (1985). Many studies are based on collecting data from university students, who then become the unit of analysis. Cook (2001), for example, investigated the attitudes of Egyptian university students toward Islam and higher education; Had-Elzein and Ahmed (2001) obtained data on perceptions and vision of peace from Sudanese university students. Married women were the unit of analysis in the Sudan Fertility Survey (1982). Mageed, Sulima and Kawther (2000) collected data about attitudes toward female genital mutilation from adult men and women. Agricultural technical workers were the unit of analysis in a study of worker satisfaction in Kenya (Mulinge and Mueller, 1998). Muneer (1989) selected farmers, who became the unit of analysis, in his study of the role of cooperatives in western Sudan. Khalafalla (2001) selected policy-makers as her sample who also became her unit of analysis.    

Groups as the unit of analysis

Research is often conducted by collecting and analyzing data for groups of people. When the purpose is to describe two or more persons as a unit, these groups become the unit of analysis. Households, consisting of various numbers of persons, often are used as the unit of analysis. Examples include the research by Fattah (1981) on Iraqi farm families and research by Davidson (1972) on changes in household life in the Nuba Mountains. Households were the unit of analysis in an extensive study of year-long spending patterns among farm families in Sierra Leone (King and Byerlee, 1977). The sample used by Grotberg and Badri (1986) consisted of intact families with a child not older than five and one-half years. The mother-child pairs in these families became the unit of analysis in this study. Other units of analysis at the group level include friendship groups, clubs, social groups, and groups of street children.

Organizations as the unit of analysis

Research often is directed at learning about organizations, such as businesses, firms, ministries of government, universities, political parties, religious bodies, or military units. Although data about each organization may be obtained by interviewing members of an organization, the data would be used to describe and compare features of each organization. For example, Gimbel and associates (2002) studied job satisfaction in community-based organizations. They collected data from employees but analyzed their data in terms of characteristics of the organizations. Mohamed (2002) analyzed operations in a sample of 150 government divisions in the United Arab Emirates. In a study of safety programs, Vredenburgh (2002) collected data about practices in 62 hospitals. These are only a few of the many management studies based on organizations as the unit of analysis.

Geographical places as the unit of analysis

Places, such as where people live, can also become the focus of study and therefore be used as the unit of analysis. Fishing villages in Uganda were the unit of analysis in a socio-economic investigation by Namara (1997). Cities, provinces, and rural versus urban areas are used as the unit of analysis in most reports issued by international organizations. Countries frequently are used as the unit of analysis. Bulluck (1986) compared the extent to which developing countries meet the social needs of citizens. Countries were also used in comparing infant mortality rates throughout sub-Saharan Africa (Frey and Field, 2000). Wang (1996) examined the extent to which women's reproductive rights were protected in 101 countries.

Other units of analysis

Although most social research is based on the kinds of units of analysis just mentioned, other things may be selected for analysis. These include the contents of newspapers and magazines (Badri and Osama, 1995; Swanjord, 1989); rural development projects (Rao, 1981); and folk tales (Mathews, 1985).

Box 5.1 summarizes frequently used units of analysis.

Box 5.1. Some frequently used units of analysis

1. Individuals - by far the most frequently used
2. Groups - husband-wife pairs, social groups
3. Organizations - businesses, offices, factories, government ministries
4. Classrooms, schools, university student bodies or groups
5. Geographical areas - villages, communities, cities, regions of a country, entire countries
6. Mass media materials - newspapers, magazines, television shows

Deciding on the unit of analysis

Most likely the unit of analysis you will use in a study will be clear from the way you defined your problem. If you have any doubts about the unit you should use, you will need to resolve this confusion before going on. Otherwise, you may not analyze your data properly and your entire study could be jeopardized. Ask yourself what unit you will base your analysis on. This is, as the term implies, your unit of analysis. Also, in the process of analyzing your data, you may shift from one unit to another. In the study by Gimbel and associates (2002) mentioned under organizations as the unit of analysis, the authors used organizations as the unit of analysis.

However, since they collected data from employees, they could have undertaken other analyses as well. For example, they might have become interested in the level of morale among employees and the length of time each worked for one of the organizations. In this case, they would have shifted to employees as the unit of analysis. Our point is simple: just be sure of your unit of analysis and that you have data appropriate to that unit.

Collecting data at the lowest possible unit

Also, remember an important rule: Regardless of what you are studying, always obtain data in terms of the lowest unit of analysis possible. There is a simple reason for this. You can usually combine data collected at a lower level into a higher level for analysis, but it won't work the other way around. For example, if you intended to analyze data about the production of things by households, collect data about the production activities for each member of the household. Then you can combine the data from each household member for analysis at the level of the entire household. This way you can still do other analyses at the level of members of the household, such as what wives and daughters produced in contrast to husbands and sons, as well as for the entire household. But if you asked only for production of the household as a whole, you would not be able to describe production by separate members of the household.
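
The rule above can be sketched in a few lines of Python. The household and member records here are hypothetical, purely to illustrate aggregating from the lower unit (household members) to the higher one (households) while keeping the member-level analysis available:

```python
# Collect data at the lowest unit (individual members), then aggregate
# upward to the household level when needed. All field names and
# numbers are invented for illustration.

member_records = [
    {"household_id": 1, "member": "wife",     "output": 40},
    {"household_id": 1, "member": "husband",  "output": 30},
    {"household_id": 1, "member": "daughter", "output": 15},
    {"household_id": 2, "member": "wife",     "output": 25},
    {"household_id": 2, "member": "son",      "output": 20},
]

# Analysis at the household level: sum members within each household.
household_totals = {}
for rec in member_records:
    household_totals[rec["household_id"]] = (
        household_totals.get(rec["household_id"], 0) + rec["output"]
    )
print(household_totals)  # {1: 85, 2: 45}

# The member-level analysis is still possible from the same data,
# e.g. production by wives and daughters versus husbands and sons.
female_output = sum(r["output"] for r in member_records
                    if r["member"] in ("wife", "daughter"))
print(female_output)  # 80
```

Had the data been collected only as household totals, the second analysis would be impossible, which is exactly the point of the rule.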

Design alternatives

Designing research is a creative process. Designs can be put together in a number of ways. By describing some of the more frequently used designs, we hope you will get an idea of the various kinds of research questions you can ask and designs you can use in answering them. To start, designs can be developed for: (1) describing variables as they currently exist (present time); (2) describing changes that have occurred (looking at the past); and (3) describing changes as they occur (present into the future). These time dimensions are illustrated in Figure 5.1. We start with the typical survey design, shown by Illustration A in Figure 5.1.

Glossary of Marketing Terms

Aided recall. Respondents are asked if they remember a commercial for the brand being tested.

Alternative hypothesis. A competing hypothesis to the null.

Attitude. A learned predisposition to respond in a consistently favourable or unfavourable manner with respect to a given object.

Audit. A formal examination and verification of either how much of a product has sold at the store level (retail audit) or how much of a product has been withdrawn from warehouses and delivered to retailers (warehouse withdrawal audits).

Balanced scale. Scale using an equal number of favourable and unfavourable categories.

Banner. The variables that span the columns of the cross-tab; generally represents the subgroups being used in the analysis.

Before-after design. Experiment where a measurement is taken from respondents before they receive the experimental treatment condition; the experimental treatment is then introduced, and the post-treatment measurement is taken.

Before-after with control design. Experiment that adds a control group to the basic before-after design; the control group is never exposed to the experimental treatment.

Between-group variations. Between-group differences in scores for groups that were exposed to different treatments - represents "explained" variation.

Blind testing. Tests where the brand name of the product is not disclosed during test.

Cartoon completion test. Projective technique that presents respondents with a cartoon of a particular situation and asks them to suggest the dialogue one cartoon character might offer in response to the comment(s) of another cartoon character.

Causality. Relationship where a change in one variable produces a change in another variable. One variable affects, influences, or determines some other variable.

Chi-square test statistic. Measure of the goodness of fit between the numbers observed in the sample and the numbers we should have seen in the sample, given the null hypothesis is true.
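
As an illustration of the definition above (the counts below are invented, not drawn from any study cited here), the statistic sums, over all categories, the squared difference between observed and expected counts divided by the expected count:

```python
# Chi-square goodness-of-fit: compare observed counts with the counts
# expected under the null hypothesis. Counts are hypothetical.

observed = [45, 55]        # e.g. "Yes" vs "No" answers in the sample
expected = [50.0, 50.0]    # counts implied by the null hypothesis

chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi_square)  # 1.0
```

The larger the value, the worse the fit between the sample and the null hypothesis; the computed value is compared against a chi-square distribution to judge significance.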

Cognition. A person's knowledge, opinions, beliefs and thoughts about the object.

Comparative scaling (non-metric scaling). Scaling process in which the subject is asked to compare a set of stimulus objects directly with one another.

Comparison product test. Designs where a consumer rates products by directly comparing two or more products.

Concept board. Illustration and copy describing how the product works and its end-benefits.

Concept evaluation tests. Concept tests designed to gauge consumer interest and determine strengths and weaknesses of the concept.

Concept screening test. Concept tests for screening new product ideas or alternative end-benefits for a single product idea.

Concept test. Collection of information on purchase intentions, likes/dislikes and attribute rating in order to measure the relative appeal of ideas or alternative positioning and to provide direction for the development of the product and the product advertising.

Concept. An idea aimed at satisfying consumer wants and needs.

Concept/construct. Names given to characteristics that we wish to measure.

Confidence interval. Range into which the true population value of the characteristic being measured will fall, assuming a given level of certainty.
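
A minimal sketch of how such an interval is commonly computed for a sample mean, assuming the normal approximation with z = 1.96 for 95% certainty; the sample values are invented:

```python
import math

# 95% confidence interval for a sample mean, normal approximation.
# The data are hypothetical ratings on a five-point-style scale.

sample = [4, 5, 6, 5, 4, 6, 5, 5, 4, 6]
n = len(sample)
mean = sum(sample) / n

# Sample standard deviation (n - 1 in the denominator).
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))

# Margin of error at the 95% level (z = 1.96).
margin = 1.96 * sd / math.sqrt(n)

print((mean - margin, mean + margin))
```

With more certainty demanded (say 99%, z = 2.58), the interval widens; with a larger sample, it narrows.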

Confounds or confounding variables. Extraneous causal factors (variables) that can possibly affect the dependent variable and, therefore, must be controlled.

Connotative meaning. The associations that the name implies, beyond its literal, explicit meaning; the imagery associated with a brand name.

Constant sum scale. Procedure whereby respondents are instructed to allocate a number of points or chips among alternatives according to some criterion - for example, preference, importance, and so on.

Constitutive definition. Specifications for the domain of the constructs of interest so as to distinguish it from other similar but different constructs.

Continuous rating scale (graphic rating scale). Procedure that instructs the respondent to assign a rating by placing a marker at the appropriate position on a line that best describes the object under study.

Control test market. Method in which the entire test market project is handled by an outside research company.

Copy recall. Percentage of respondents in the programme audience that correctly recalled copy elements in the test commercial.

Cross-price elasticity of demand. The percentage change in demand for one product divided by the percentage change in price of the second product, assuming that all other factors affecting demand are constant.
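
The formula can be computed directly; the percentages below are hypothetical (demand for one product rises 10% when the price of a second product rises 5%, so the two behave as substitutes):

```python
# Cross-price elasticity of demand, per the definition above.
# Both percentage changes are invented example values.

pct_change_demand_a = 10.0   # % change in quantity demanded of product A
pct_change_price_b = 5.0     # % change in price of product B

cross_price_elasticity = pct_change_demand_a / pct_change_price_b
print(cross_price_elasticity)  # 2.0
```

A positive result suggests substitutes; a negative result suggests complements.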

Diary panels. Samples of households that have agreed to provide specific information regularly over an extended period of time. Respondents in a diary panel are asked to record specific behaviours as they occur, as opposed to merely responding to a series of questions.

Delphi method. A method of forecasting based on asking a group of experts for their best estimate of a future event, then processing and feeding back some of the information obtained, and then repeating the process; on the last set of responses, the median is usually chosen as the best estimate for the group.
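
The final step of the Delphi method described above, taking the median of the last round of estimates as the group forecast, can be sketched as follows (the estimates are invented):

```python
import statistics

# Final-round forecasts from a hypothetical panel of five experts.
final_round_estimates = [120, 150, 135, 140, 160]

# The median of the last set of responses is usually chosen as the
# best estimate for the group.
group_forecast = statistics.median(final_round_estimates)
print(group_forecast)  # 140
```

The median is preferred over the mean here because it is not pulled toward a single extreme estimate.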

Dependent variable. A variable whose value is thought to be affected by one or more independent variables. For instance, sales (dependent variable) are likely to be a function of advertising, availability, price, degree of competitive advantage, customer tastes, etc.

Depth interview ("one-on-one"). Sessions in which free association and hidden sources of feelings are discussed, generally through a very loose, unstructured question guide, administered by a highly skilled interviewer. It attempts to uncover underlying motivations, prejudice, attitudes toward sensitive issues, etc.

Dollar metric scale (graded paired comparison). Scale that extends the paired comparison method by asking respondents to indicate which brand is preferred and how much more they are willing to pay to acquire their preferred brand.

Double-barrelled questions. Questions in which two opinions are joined together.

Dummy magazine test. A realistic-looking test format using a dummy magazine that systematically varies the advertisements in such a way that some families receive a magazine containing the test ad and other (matched) families receive a dummy magazine containing no ads at all.

Duo-trio designs. Test where a respondent is given a standard product and asked to determine which of two other products is more similar.

Editing process. Review of the questionnaires for maximum accuracy and precision.

Ethnography. The systematic recording of human cultures.

Experimental design. A contrived situation designed so as to permit the researcher to manipulate one or more independent variables whilst controlling all extraneous variables and measuring the resultant effects on a dependent variable.

Filter question. A question that is asked to determine which branching question, if any, will be asked.

Focus group interview. Interview in which the interviewer listens to a group of individuals, who belong to the appropriate target market, talk about an important marketing issue.

Forced itemised test. Procedure in which a respondent indicates a response on a scale, even though he or she may have "no opinion" or "no knowledge" about the question.

Frequency distribution. The number of respondents who choose each alternative answer as well as the percentage and cumulative percentage of respondents who answer.
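
A minimal sketch, using invented survey answers, of counting the respondents who choose each alternative and deriving the percentage and cumulative percentage for each:

```python
# Build a frequency distribution as defined above: count, percentage,
# and cumulative percentage per answer category. Answers are invented.

answers = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]
categories = ["agree", "neutral", "disagree"]

n = len(answers)
cumulative = 0.0
table = []
for cat in categories:
    count = answers.count(cat)
    pct = 100.0 * count / n
    cumulative += pct
    table.append((cat, count, pct, cumulative))

for row in table:
    print("%-8s n=%d  %.1f%%  cum %.1f%%" % row)
```

The cumulative column should reach 100% at the last category, which is a quick sanity check on the tabulation.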

Funnel sequence. The procedure of asking the most general (or unrestricted) question about the topic under study first, followed by successively more restricted questions.

Gross incidence. Product/category use incidence for the entire population.

Hypothesis. An assumption or guess the researcher or manager has about some characteristic of the population being sampled.

Independent variable. A variable over which the researcher is able to exert some control with a view to studying its effect upon a dependent variable. For instance, an experiment may be conducted where the price (independent variable) of a dozen boxed carnations is varied and the sales (dependent variable) are observed at each price set.

Internal secondary data. Data available within the organisation - for example, accounting records, management decision support systems, and sales records.

Interval data. Measurements that allow us to tell how far apart two or more objects are with respect to attributes and consequently to compare the difference between the numbers assigned. Because interval data lack a natural or absolute origin, the absolute magnitude of the numbers cannot be compared.

Itemised (closed-ended) questions. Format in which the respondent is provided with numbers and/or predetermined descriptions and is asked to select the one that best describes his or her feelings.

Itemised rating scaling. The respondent is provided with a scale having numbers and/or brief descriptions associated with each category and asked to select one of the limited number of categories, ordered in terms of scale position, that best describes the object under study.

Judgemental sampling. Studies in which respondents are selected because it is expected that they are representative of the population of interest and/or meet the specific needs of the research study.

Judgemental data. Information, generally based on perceptions or preferences, that may give better indications of future patterns of consumption.

Jury of expert opinion. A method of forecasting based on combining the views of key executives.

Laboratory experimental environment. Research environment constructed solely for the experiment. The experiment has direct control over most, if not all, of the crucial factors that might possibly affect the experimental outcome.

Likert scale. Scaling technique where a large number of items that are statements of belief or intention are generated. Each item is judged according to whether it reflects a favourable or unfavourable attitude toward the object in question. Respondents are then asked to rate the attitude towards the object on each scale item in terms of a five-point category labelled scale.

Line marking. Similarity judgements recorded by making a mark on a 5-inch line anchored by the phrases "exactly the same" and "completely different".

Line marking/continuous rating non-comparative scale. Procedure that instructs the respondent to assign a rating by placing a marker at the appropriate position on a line that best describes the object under study. There is no explicit standard for comparison.

Loaded questions. Questions that suggest what the answer should be or indicate the researcher's position on the issue under study.

Loadings. Weightings that give the correlation of the attribute with respect to the dimension.

Magnitude estimation. Scale in which respondents assign numbers to objects, brands, attitude statements, and the like so that ratios between the assigned numbers reflect ratios among the objects on the criterion being scaled.

Mail diary services. General term for services involving a sample of respondents who have agreed to provide information such as media exposure and purchase behaviour on a regular basis over an extended period of time.

Mail surveys. Data-collection method that involves sending out a fairly structured questionnaire to a sample of respondents.

Mall-intercept personal survey. Survey method using a central-location test facility at a shopping mall; respondents are intercepted while they are shopping.

Market segment. Subgroups of consumers who respond to a given marketing-mix strategy in a similar manner.

Maturation. Threat to internal validity; refers to changes in biology or psychology of the respondent that occur over time and can affect the dependent variable irrespective of the treatment conditions.

Measurement. Process of assigning numbers to objects to represent quantities of attributes.

Monadic products test. Designs where a consumer evaluates only one product, having no other product for comparison.

Mortality. Threat to internal validity; refers to the differential loss (refusal to continue in the experiment) of respondents from the treatment condition groups.

Nominal data. Measurement in which the numbers assigned allow us to place an object in one and only one of a set of mutually exclusive and collectively exhaustive classes with no implied ordering.

Non-comparative scaling (monadic scaling). Scaling method whereby the respondent is asked to evaluate each object on a scale independently of the other objects being investigated.

Non-probability samples. Form of sampling where there is no way of determining exactly what the chance is of selecting any particular element or sampling unit into the sample.

Non-response error. Error that occurs because not all of the respondents included in the sample respond; in other words with non-response, the mean true value (on the variable of interest) of the sample respondents who do respond may be different from the entire sample's true mean value (on the variable of interest).

Non-sampling error. Degree to which the mean observed value (on the variable of interest) for the respondents of a particular sample differs from the mean true value of that particular sample of respondents (on the variable of interest); it arises from sources other than sampling.

Observational methods. Observation of behaviour, directly or indirectly, by human or mechanical methods.

Optical scanning. Direct machine reading of numerical values or alphanumeric codes and transcription onto cards, magnetic tape, or disk.

Order bias. Condition whereby brands receive different ratings depending on whether they were shown first, second, third, etc.

Ordinal data. Measurement in which the response alternatives define an ordered sequence so that the choice listed first is less (greater) than the second, the second less (greater) than the third, and so forth. The numbers assigned do not reflect the magnitude of an attribute possessed by an object.

Over-registration. Condition that occurs when a sampling frame consists of sampling units in the target population plus additional units as well.

Paired comparison designs. Tests where a consumer directly compares two products.

Paired comparison scale. Scale that presents the respondent with two objects at a time and asks the respondent to select one of the two according to some criterion.
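
With n objects, a paired comparison design generates n(n - 1)/2 pairs for each respondent to judge. A minimal Python sketch (the brand names are hypothetical) enumerates the comparisons:

```python
from itertools import combinations

brands = ["Brand A", "Brand B", "Brand C", "Brand D"]  # hypothetical objects

# Every unordered pair the respondent must judge: n(n - 1) / 2 comparisons.
pairs = list(combinations(brands, 2))

print(len(pairs))  # 4 * 3 / 2 = 6 pairs
```

The quadratic growth in pairs is why paired comparisons become impractical with many objects.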

Primary data. Data collected for a specific research need; they are customised and require specialised collection procedures.

Print ad tests. Attempts to assess the power of an ad placed in a magazine or newspaper to be remembered, to communicate, to affect attitudes, and ultimately, to produce sales.

Probability sampling designs. Samples drawn in such a way that each member of the population has a known, non-zero chance of being selected.

Project proposal. A written description of the key research design that defines the proposed study.

Projective techniques. A class of techniques which presume that respondents cannot or will not communicate their feelings and beliefs directly; provides a structured question format in which respondents can respond indirectly by projecting their own feelings and beliefs into the situation while they interpret the behaviour of others.

Proportional allocation. Sampling design guaranteeing that stratified random sampling will be at least as efficient as SRS. The number of elements selected from a stratum is directly proportional to the size of the stratum.
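
Under proportional allocation, the number of elements drawn from stratum h is n × Nh / N. A small Python sketch of this rule (the stratum sizes are hypothetical; the rounding scheme shown is one common choice, not the only one):

```python
def proportional_allocation(strata_sizes, total_sample):
    """Allocate a total sample across strata in proportion to stratum size.

    Rounds down, then hands any leftover units to the strata with the
    largest fractional parts so allocations always sum to total_sample.
    """
    population = sum(strata_sizes)
    raw = [total_sample * size / population for size in strata_sizes]
    alloc = [int(r) for r in raw]
    remainder = total_sample - sum(alloc)
    order = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True)
    for i in order[:remainder]:
        alloc[i] += 1
    return alloc

print(proportional_allocation([5000, 3000, 2000], 200))  # → [100, 60, 40]
```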

Purchase intent scale. Procedure attempting to measure a respondent's interest in a brand or product.

Q-sort scale. Rank order procedure in which objects are sorted into piles based on similarity with respect to some criterion.

Qualitative research methods. Techniques involving relatively small numbers of respondents, which are designed to provide depth of understanding rather than information that can be projected to the whole population.

Quota sampling. Design that involves selecting specific numbers of respondents who possess certain characteristics known, or presumed, to affect the subject of the research study.

Random sampling error. Error caused when the selected sample is an imperfect representation of the overall population; therefore, the true mean value for the particular sample of respondents (on the variable of interest) differs from the true mean value for the overall population (on the variable of interest).

Random sources of error. Denoted by XR, component made up of transient personal factors that affect the observed scale score in different ways each time the test is administered.

Range. Differences between largest and smallest values of distribution.

Rank-order scale. Scale in which respondents are presented with several objects simultaneously and requested to "order" or "rank" them.

Ratio data. Measurements that have the same properties as interval scales, but which also have a natural or absolute origin.

Recall. Measures of how many people remember having seen the test ad both on an unaided and aided basis.

Related samples. Samples in which the measurement of the variable of interest in one sample can affect the measurement of that variable in another sample.

Residual. An error term representing the difference between the actual and predicted values of the dependent variable.

Response error. Error that occurs because respondents (who do respond) may give inaccurate answers, or a respondent's answers may be misrecorded.

Response rates. The total number of respondents sent questionnaires who complete and return them, expressed as a percentage.

Simple two-stage cluster sampling. Design in which the clusters at the first stage are selected by SRS; at the second stage the sampling units are selected probabilistically by SRS from each sample cluster so that, with clusters of equal size, the same fraction of sampling units is drawn from each sample cluster.

Sample. A subset of the target population from which information is gathered to estimate something about the population.

Sampling frame. An explicit list of individuals or households that are eligible for inclusion in the sample.

Sampling interval. Computed by taking N/n; together with r, the first chosen element to be included in the sample, it determines which elements will be included in the sample.

Sampling units. The elements that make up the population.

Sampling variable. Variable that represents the characteristic of the population that we wish to estimate.

Sampling. Identification of a group of individuals or households (or institutions or objects) that can be reached by mail, telephone, or in person, and that possess the information relevant to solving the marketing problem at hand.

Scale transformation. Procedures for transforming data by one of a number of simple arithmetic operations to make comparisons across respondents and/or scale items.

Secondary data. Data that have been collected for another project and have already been published. Sources can be in-house or external.

Selection bias. Threat to internal validity; refers to the improper assignment of respondents to treatment conditions.

Semantic differential scale. Semantic scale utilising bi-polar adjectives as end points.

Sentence completion. Projective technique whereby respondents are asked to complete a number of incomplete sentences with the first word or phrase that comes to mind.

Simple one-stage cluster sampling. One-step design in which the first-stage clusters are selected by SRS and, within each selected cluster, all sampling units are chosen.

Simple random sampling. Design guaranteeing that every sample of a given size as well as every individual in the target population has an equal chance of being selected.

Simple weighting. Procedure that attempts to remove non-response bias by assigning weights to the data that in some sense account for non-response.

Simulated test market. Method whereby various groups of pre-selected respondents are interviewed, monitored and sampled about the new product; in addition, respondents may be exposed to various media messages in a controlled environment.

Single-stage cluster sample. One step design where, once the sample of clusters is selected, every sampling unit within each of the selected clusters is included in the sample.

Snowball design. Sample formed by having each respondent, after being interviewed, identify others who belong to the target population of interest.

Split-halves. Scale items split in terms of odd- and even-numbered items, or randomly.

Standard deviation. Index of variability in the same measurement units used to calculate the mean.

Standard error (s). Indication of the reliability of an estimate of a population parameter; it is computed by dividing the standard deviation of the sample estimate by the square root of the sample size.
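
The computation can be sketched in Python (the ratings are hypothetical scale scores; the sample standard deviation uses the n - 1 denominator):

```python
import math

def std_dev(values):
    """Sample standard deviation (n - 1 denominator)."""
    n = len(values)
    mean = sum(values) / n
    return math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))

def standard_error(values):
    """Standard error of the mean: s divided by the square root of n."""
    return std_dev(values) / math.sqrt(len(values))

ratings = [4, 5, 3, 4, 5, 4, 3, 4, 4, 4]  # hypothetical scale scores
print(round(standard_error(ratings), 3))  # → 0.211
```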

Stapel scale. Procedure using a single criterion or key word and instructing the respondent to rate the object on a scale.

Store audits. Studies that monitor performance in the marketplace, covering measures such as dollar and unit sales/share, distribution/out-of-stock, inventory, price, and promotional activity.

Stratified sampling. Design that involves partitioning the entire population of elements into sub-populations, called strata, and then selecting elements separately from each sub-population.

Survey. A method of gathering information from a number of individuals (the respondents, who collectively form a sample) in order to learn something about a larger target population from which the sample was drawn.

Syndicated research services. Market research suppliers who collect data on a regular basis with standardised procedures. The data are sold to different clients.

Systematic sampling. Design whereby the target sample is generated by picking an arbitrary starting point (in a list) and then picking every nth element in succession from a list.
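
A minimal Python sketch of the procedure (frame size and sample size are hypothetical):

```python
import random

def systematic_sample(frame, n):
    """Draw a systematic sample of size n from a list.

    The sampling interval k is N/n; a random starting point r within the
    first interval determines every element included thereafter.
    """
    k = len(frame) // n        # sampling interval (N / n)
    r = random.randrange(k)    # random start in [0, k)
    return [frame[r + i * k] for i in range(n)]

random.seed(7)
frame = list(range(1, 101))    # hypothetical frame of 100 elements
print(systematic_sample(frame, 10))
```

Successive sampled elements are always exactly one interval apart, which is what distinguishes this design from simple random sampling.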

Systematic sources of error. Denoted by XS, component made up of stable characteristics that affect the observed scale score in the same way each time the test is administered.

Target population. Set of people, products, firms, markets, etc., that contain the information that is of interest to the researcher.

Telephone surveys. Survey method that involves phoning a sample of respondents drawn from an eligible population and asking them a series of questions.

Telescoping. Condition that occurs when a respondent either compresses time or remembers an event as occurring more recently than it actually occurred.

Test markets. A system that allows the marketing manager to evaluate the proposed national marketing program in a smaller, less expensive situation with a view to determining whether the potential profit opportunity from rolling out the new product or line extension outweighs the potential risks.

Thematic apperception test (TAT). Projective technique presenting respondents with a series of pictures or cartoons in which consumers and products are the primary topic of attention.

Third person/role playing. Projective technique that presents respondents with a verbal or visual situation and asks them to relate the feelings and beliefs of a third person to the situation, rather than to directly express their own feelings and beliefs about the situation.

Top-down approach. Process of breaking down clusters: at the beginning, all respondents belong to one segment; respondents are then partitioned into two segments, then three segments, and so on until each respondent occupies his or her own segment.

Tracking. System for measuring the key sales components of customer awareness and trial and repeat purchases.

Trade-off procedure. Technique where the respondent is asked to consider two attributes at a time - to rank the various combinations of each pair of attribute descriptions from a most preferred to least preferred.

Treatment. A reference to an independent variable that has been manipulated by the researcher. For example, a researcher may be investigating the customer benefits of three prototype packaging designs in order to determine which design to use. The independent variable which is manipulated is product packaging.

Triangle designs. Tests where a respondent is given two samples of one product and one sample of another and asked to identify the one that differs.

Two-tail hypothesis test. Test used when the alternative hypothesis is non-directional - the region of rejection is in both tails of the distribution.

Type I error. Situation occurring when the null hypothesis is in fact true, but is nevertheless rejected on the basis of the sample data.

Type II or beta error. Situation occurring when we fail to reject the null hypothesis (H0) when in fact the alternative (HA) is true.
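
The long-run meaning of the two error types can be illustrated by simulation: when the null hypothesis is true, a test conducted at significance level alpha = 0.05 commits a Type I error in about 5 percent of repeated studies. A Python sketch (the z-test, sample size, and trial count are illustrative choices):

```python
import math
import random

random.seed(42)

def z_test_p_value(sample, mu0, sigma):
    """Two-tailed p-value for H0: population mean equals mu0 (sigma known)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

# Simulate many studies in which H0 is actually true (mean 0, sigma 1).
# Rejecting at alpha = 0.05 is then a Type I error; its long-run rate
# approximates alpha.
trials = 2000
rejections = sum(
    z_test_p_value([random.gauss(0, 1) for _ in range(30)], 0, 1) < 0.05
    for _ in range(trials)
)
print(rejections / trials)  # close to 0.05
```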

Unaided questions. Questions that do not provide any clues to the answer.

Unaided recall. Respondents are asked if they remember seeing a commercial for a product in the product category of interest.

Unbalanced scale. Scale using an unequal number of favourable and unfavourable scale categories.

Unfinished scenario story completion. Projective technique whereby respondents complete the end of a story or supply the motive for why one or more actors in a story behaved as they did.

Unstructured interview. Method of interviewing where questions are not completely predetermined and the interviewer is free to probe for all details and underlying feelings.

Utility scale values. Ratings that indicate how influential each attribute level is in the consumer's overall evaluations.

Validation. Procedure where between 10 and 20 percent of all respondents "reportedly" interviewed are recontacted by telephone and asked a few questions to verify that the interview did in fact take place.

Validity. Refers to the best approximation to truth or falsity of a proposition, including propositions concerning cause-and-effect relationships.

Word association. Projective technique whereby respondents are presented with a list of words, one at a time, and asked to indicate what word comes immediately to mind.

This is one of a series of four texts on marketing and agribusiness prepared by an FAO project for use in universities and colleges teaching agricultural marketing, agribusiness and business studies. This text, Marketing research and information systems, reviews the role of marketing research and the techniques used to undertake market research, including questionnaire design and sampling and writing of a research report. The principal components of a marketing information system and the use of marketing research information in decision-making are discussed.