Research Methodology and Report Writing




















In the long run, such random errors (the population mean minus the sample mean) add up to zero. In other words, the larger the sample, the more closely the sample approaches the true value of the population. As a tool of analysis, statistics is the theory and method of analysing quantitative data obtained from samples of observations.

It helps in decision-making on whether to accept or reject hypothesized relations between variables and in making reliable inferences from empirical observations. If an observation fits the chance model, the observed relations are said not to be statistically significant. If not, the relations are adjudged to be significant.
Sampling Methods
There are two broad types of sampling methods.
Probabilistic Sampling Methods
These are sampling methods in which the researcher is required to utilize the principle of randomization, or chance procedure, in at least one of the stages of the sampling process.

There are four basic types, namely:
(i) Simple Random Sampling: In this method, the entire process of sampling is guided by chance procedures. It is the most scientific of the sampling methods and is the model on which scientific sampling is based.
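By way of illustration, a minimal sketch of simple random sampling, written in Python with a hypothetical sampling frame of 1,000 numbered subjects and a desired sample of 50, might look like this:

```python
# Simple random sampling: every subject in the frame has an equal chance
# of selection, and selection is guided entirely by chance.
import random

sampling_frame = list(range(1, 1001))   # hypothetical list of the population
sample_size = 50

simple_random_sample = random.sample(sampling_frame, sample_size)  # without replacement
print(sorted(simple_random_sample))
```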

However, it is not commonly used in the social sciences and the humanities. This is because it can lead to unrepresentative samples in circumstances in which diversities in the population have to be meaningfully reflected in the sample being drawn up.

We must always remember, therefore, that scientific sampling is not an end in itself. It is only a means to the end of drawing up a representative sample from a given population. Where there is a clash between means and end, the end has to prevail. In any case, as is clear below, other scientific methods of sampling can be utilized in the social sciences and humanities to address this problem.
(ii) Systematic Random Sampling: Like simple random sampling, this method starts from a list of the population; the list is the sampling frame. It is, however, more systematic, as the name suggests.

The advantage of systematic random sampling is that it is easier to use than simple random sampling in situations where the sampling frame (i.e., the list of the population) is very long. The major weakness of the method, which it shares with simple random sampling, is that it requires a list of the population.
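A minimal sketch of systematic random sampling, assuming the same hypothetical frame of 1,000 subjects, might proceed as follows: compute the sampling interval, pick a random start within the first interval, and then take every k-th subject.

```python
# Systematic random sampling from a hypothetical sampling frame.
import random

sampling_frame = list(range(1, 1001))
sample_size = 50
interval = len(sampling_frame) // sample_size   # the sampling interval k (here 20)

start = random.randrange(interval)              # random start within the first interval
systematic_sample = sampling_frame[start::interval]
print(len(systematic_sample), systematic_sample[:5])
```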

(iii) Stratified Sampling: This involves dividing the population into groups called strata and then sampling subjects from within the strata. As indicated above, the decision on whether the strata should be proportionally reflected in the sample depends on the goal of the study under consideration.
(iv) Cluster Sampling: This involves dividing the population into a large number of groups called clusters and then successively sampling such clusters, from the largest down to the smallest, before finally sampling subjects.
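A minimal sketch of proportional stratified sampling, assuming a hypothetical population divided into three service strata, could look like this:

```python
# Stratified sampling: sample within each stratum so that the strata are
# proportionally reflected in the overall sample.
import random

population = {
    "army": list(range(1, 601)),            # 600 hypothetical subjects
    "navy": list(range(601, 851)),          # 250 hypothetical subjects
    "air_force": list(range(851, 1001)),    # 150 hypothetical subjects
}
sampling_fraction = 0.10

stratified_sample = []
for stratum, members in population.items():
    n = round(len(members) * sampling_fraction)
    stratified_sample.extend(random.sample(members, n))

print(len(stratified_sample))  # 100 subjects, in proportion to stratum sizes
```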

Non-Probabilistic Sampling Methods
These are non-scientific methods of sampling. They do not apply the principle of randomization in their procedures. The basic ones include the following:
(i) Quota Sampling: This is a method of setting quotas and then meeting such quotas, with little or no attention paid to how the quotas are met or what goes into them.
(ii) Purposive Sampling: Here subjects are deliberately selected on the basis of the researcher's judgment about their relevance to the study; in other words, it is sampling done on purpose.
Measurement
This is the process of empirically observing, codifying and estimating the extent of the presence of concepts related to the phenomena under investigation.

Measurement involves three basic steps at the planning and execution stages. These are: (i) devising measurement strategies; (ii) establishing the accuracy of measurements; and (iii) establishing the precision of measurements. At this point, it is important to be careful in making operational definitions so as to ensure that such definitions coincide as closely as possible with the meaning of the concept under investigation.

Two questions arise about such measurement strategies: to what extent are they reliable, and to what extent are they valid? The first question focuses our minds on how to ensure that, in the hands of different people, across time and space, and over repeated trials, the measurement strategies will yield the same results.

The second question helps us to gauge the extent to which what we set out to measure is actually what we end up measuring. Thus, it assists us in planning for valid measures. A valid measure is one that measures what it is supposed to measure by enhancing the correspondence between itself and the concept it seeks to measure. For instance, you may set out to collect data on the relationship between the ease with which people get registered to vote and voter turn-out in elections, with a hypothesis to the effect that the easier the voter registration, the higher the voter turn-out.

The level of precision determines the amount of information that can be collected on a given variable. However, the nature of the variable also determines the level of precision that can be attained and the amount of information that can be collected on it. Generally, where the concept under investigation allows for high precision and full information, we should seek those targets in the planning and execution of the research.

Precision enhances our capacity to be more complete and informative about the outcome of our study. Obviously, as we move from the first option to the third, the level of precision increases and the information gets more complete. However, while the variable under reference (height) can be measured at different levels of precision, some other variables, such as sex, religion or service, can be measured only at the first level, that of setting up categories.

It is, therefore, important for us to be able to identify the appropriate level of measurement for any given variable. As is clear below, the level of precision attainable in the measurement of discrete or categorical variables is different from that for continuous variables. Ultimately, and especially so for quantitative studies that require statistical analysis, options for analysis depend on the level at which the data are measured.

In other words, the level of measurement in quantitative studies determines the statistical tools and techniques that can be used to analyse the data.
Levels of Measurement Precision
There are four levels of measurement conventionally agreed to by the research community, following a classification developed by S. S. Stevens in 1946. The first is the nominal level, at which values are simply sorted into categories with no inherent ordering and no meaningful distances between them. For this reason, only statistical tools that do not assume ordering or meaningful distances should be used to analyse data measured and collected at this level.

The most appropriate statistical measure of central tendency for data collected at the nominal level of precision is, therefore, the mode. To test relationships arising from such data, appropriate statistical tools of analysis include the Chi-Square (χ²) tests and their derivatives. The second is the ordinal level, which is attained when categories are rank-ordered according to some criterion (e.g., each category is either less than or more than the others). However, it conveys no information as to the extent of the difference between or among the categories.

In other words, there is no information or indication of the distance separating the categories. The most appropriate statistical measure of central tendency at this level is the median. Appropriate statistical tools for testing relationships include the Spearman Rank Order correlation, the Median test and the Mann-Whitney test, among others. The third is the interval level. Such measurement not only orders categories but also indicates the distances between them. It has the property of defining the distances between categories in terms of fixed and equal units.

For instance, a thermometer records temperature in degrees, and a single degree implies the same amount of heat wherever it falls on the scale. In other words, the difference between 40°F and 44°F is the same as that between 94°F and 98°F. This level orders values; measures the distances between values; does not, however, have an inherently determined zero point; and therefore allows only for the study of differences between values, not their proportionate magnitudes.

For instance, it would be incorrect to say that 80°F is twice as much heat as 40°F. In social science research, as well as in the quantitative aspects of the humanities, it is difficult to find interval-level measures since, in such studies, we tend to deal with variables with true zero points. The most appropriate statistical tool for gauging central tendency in data collected at this level is the mean.

Appropriate inferential tools include the Pearson tests, regression analysis, ANOVA and the t-test, among others. The fourth is the ratio level. It assigns real numbers to observations, has equal intervals of measurement, and has an absolute (true) zero point. Examples of variables whose values can be measured at this level are arms, population, conflict and distance.

Essentially, measurement at this level has all the properties of interval-level measurement outlined above, plus the property that its zero point is inherently defined by the measurement scheme.

This property of a fixed and given zero point means that ratio comparisons can be made along with distance comparisons, as, for instance, when we note that a war casualty figure of 100,000 is twice as heavy as one of 50,000. It should also be noted that only continuous variables can be measured at this level of precision.

Methods of Measurement
(a) Single-item Measures: By way of concluding this section on how to design valid measurement procedures, it has to be noted that while some variables can be measured with a single item on our measuring instrument, others are difficult to measure with a single item. Age (in years), sex (male, female, other), religion (Moslem, Christian, traditionalist, other), marital status (single, married, divorced, other) and height (in inches, metres, etc.) are variables that can be measured with a single item.

(b) Multi-item Measures: However, variables with multiple dimensions or aspects, such as freedom, democracy, performance, stability, power and tolerance, require multi-item measures. This is where scaling comes in.

Scaling is the process of assigning scores to a series of ordered items, using a multiplicity of operational indicators. The resulting summary score is the representative measurement of the phenomenon. To illustrate, work out an example in which you seek to measure the extent of democracy in five countries. Identify a number of items with which to measure the extent of democracy (e.g., whether there are regular free elections or a free press). Then determine the score for each item on each observation (Yes attracts a score of one, No attracts zero) and add the scores up.
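A minimal sketch of such an additive measure, assuming five hypothetical countries and four illustrative yes/no items (the items and scores below are invented for the example), might be:

```python
# Build a simple additive democracy index: Yes = 1, No = 0, index = sum of item scores.
items = ["free_elections", "free_press", "independent_judiciary", "civilian_control"]

scores = {
    "Country A": {"free_elections": 1, "free_press": 1, "independent_judiciary": 1, "civilian_control": 1},
    "Country B": {"free_elections": 1, "free_press": 0, "independent_judiciary": 1, "civilian_control": 0},
    "Country C": {"free_elections": 0, "free_press": 0, "independent_judiciary": 0, "civilian_control": 1},
    "Country D": {"free_elections": 1, "free_press": 1, "independent_judiciary": 0, "civilian_control": 1},
    "Country E": {"free_elections": 0, "free_press": 1, "independent_judiciary": 0, "civilian_control": 0},
}

democracy_index = {country: sum(answers[item] for item in items)
                   for country, answers in scores.items()}
print(democracy_index)   # e.g. {'Country A': 4, 'Country B': 2, ...}
```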

However, such summary measures (indexes) are arbitrary in that they allow both the selection of items and the scoring of individual items to depend largely on the judgment of the researcher. Scales, on the other hand, generally involve procedures that are less dependent on the researcher and are, in that sense, improvements on indexes.

Data Collection
Methods available here include experimentation, document analysis and field methods. At the end of this series of lectures on data collection, participants are expected to be able to: (a) identify basic methods of data collection; (b) make decisions on the data collection methods appropriate for a specified research topic; and (c) identify and construct measuring instruments appropriate for specified data collection methods.
(a) Experimentation: In an experiment, the researcher manipulates one or more variables in a controlled setting and observes the effect of the manipulation. In a sense, the method applies also whenever a more-or-less artificial setting is put in place for the purpose of replicating, in a controlled context, a real-life possibility (for instance, war gaming or simulation).

The manipulated variable is referred to as the independent variable because it is independently manipulated by the researcher. The variable examined for the effects of the manipulation(s) is conveniently referred to as the dependent variable.

It must be noted that, in the social sciences and the humanities, a pure experiment, in which the researcher has total control of the research setting, is actually an ideal. Nonetheless, the ingredients of an experiment include the following: (a) a list of variables, including at least one independent variable (called the experimental variable) and at least one dependent variable; (b) at least one experimental or study group to be exposed to the independent variable and at least one control group that will not be exposed to the independent variable.

The assignment of subjects from the population into the groups is expected to be done randomly and, from time to time, in combination with precision matching (in which, in addition to randomization, the researcher matches pairs of subjects that are as similar as possible on the variables or characteristics being controlled for and assigns one of each pair to the experimental group and the other to the control group); and (c) an appropriate research design. Studies that seek to establish causality (X causes Y) are embarking on a very ambitious enterprise.
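As a minimal sketch of the random assignment step described above, using a hypothetical list of 20 subject identifiers, one might write:

```python
# Randomly assign subjects to an experimental group and a control group.
import random

subjects = [f"S{i:02d}" for i in range(1, 21)]   # hypothetical subject identifiers
random.shuffle(subjects)                          # randomization

experimental_group = subjects[:10]
control_group = subjects[10:]
print("Experimental:", experimental_group)
print("Control:", control_group)
```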

Experiments are useful here because they help to generate data that can assist in developing the three crucial types of research evidence required to establish causation beyond reasonable doubt.

These are: (a) evidence of concomitant variation between the dependent and independent variables, which suggests either that the two are associated or that they are not; in other words, such evidence indicates the extent to which the variables vary together, that is, whether a change in X leads to a change in Y; (b) evidence of the time order of occurrence of the variables, showing that the presumed cause precedes the presumed effect; and (c) evidence eliminating other possible causal factors.

The basic measuring instrument for experimentation is the recording schedule. It takes the form of either an interview schedule or a questionnaire. Issues relating to its construction will be taken up later, along with those relating to questionnaire construction.
(b) Document Analysis: For the purpose of this series of lectures, two basic types of document analysis are identified below.

(i) Content Analysis: This method enables the researcher to distill from manifest content elements of latent content, influencing factors and the intent of the material in question. However, it deals first and foremost with manifest content, with the lines and not between the lines. As outlined above, content analysis is objective in that it prescribes that the categories used to collect data must be defined so precisely that different researchers can analyse the same content using these definitions and arrive at the same results.

It is systematic in that it insists that the selection of content to be analysed must be based on a formal, predetermined, unbiased plan. This characteristic separates content analysis from the run-of-the-mill, argumentative, biased collection of data to prove a point. Content analysis is often, though not always, quantitative. Its results are usually expressed numerically in such ways as frequencies, percentages, ratios and contingency tables, among others.

The preference for quantification is based on the assumption that the precise language of mathematics allows for consensus on what is correct and what is incorrect. In collecting data, content analysis asks, first, what the content is about. Is it about war or about peace? Is it about strategy or tactics? Is it about the quality of personnel, or the quality of materials?

It also asks about the direction of the content. Does the content condemn war or commend it? Is it favourable toward the adopted strategy or tactic, or is it unfavourable, or neutral? Is its position positive in assessing the quality of personnel or materials, or is it negative, neutral, or not clear? Does it approve or disapprove, commend or condemn? Questions are also asked about the type of content. For instance, a study of books on the Nigerian Civil War has to answer the question of what type of book is involved.

Fiction or non-fiction? Similarly, a study of security concerns in radio broadcasts has to consider the types of programme that could express these concerns: news, entertainment, interviews and commentaries. Another category is the form of the content: the grammatical or syntactical sentence-building form in which the content is cast, or its structural components (how much is fact, how much is preference, and so on). Yet another is prominence. Is it on the front page of the newspaper, or is it buried inside? Is it the first item on television network news, or the very last?

Does it take 50 pages of a document, or is it treated in only 50 words in the same document?
Stages in Content Analysis
(i) Identify and operationally define your concepts. (ii) Sample the content to be analysed. Sampling may be required for both the title of the material (e.g., which books are to be studied) and the period to be covered. In the same vein, a study of newspaper coverage of Defence Headquarters has to sample for newspapers as well as for the period in which the selected newspapers will be content-analysed. (iii) Determine the coding unit. This is the smallest unit, division or segment of content upon which the decision on what kind of score the content is to receive is based.

This coding unit could be a particular amount of space or time, a key word, a theme or an item. Sometimes, however, a decision on how to score content cannot be arrived at within the basic coding unit. It is to ensure uniformity that, at the planning stage, this problem is anticipated and provided for by the setting up of a context unit, a larger body of content that may be consulted when scoring a coding unit. (iv) Construct the coding schedule. A coding schedule is essentially like an interview schedule or a questionnaire, with the specific difference that while an interview schedule or questionnaire is administered on, and responded to by, human beings, a coding schedule has content as its subject.

The details of how to construct a coding schedule are, therefore, adequately covered in the treatment of questionnaire construction below, so long as it is borne in mind that adjustments have to be made in constructing a coding schedule to keep the subjects of content analysis in proper focus.
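By way of illustration, a minimal sketch of coding with a key word as the coding unit, using an invented passage and the key word "security", might look like this:

```python
# Count occurrences of a key word (the coding unit) in a piece of content.
import re

text = ("Security concerns dominated the broadcast. The commentary linked "
        "national security to economic stability, while the news segment "
        "reported on regional security cooperation.")

keyword = "security"
frequency = len(re.findall(rf"\b{keyword}\b", text, flags=re.IGNORECASE))
print(f"'{keyword}' occurs {frequency} times")   # 3
```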

Before the coding schedule is used to collect data, it should be pre-tested: a sample of content is coded independently by several judges and a coefficient of reliability is computed. Where three judges are used, the coefficient is given by 3M / (N1 + N2 + N3), where M is the number of coding decisions on which all the judges agree and N1, N2 and N3 are the numbers of coding decisions made by each of the three judges. The closer the coefficient of reliability is to 1, the greater the confidence with which the coding schedule can be used to collect data.

A figure close to 1 is acceptable. For lower figures, it is suggested that the clarity of the operational definitions be enhanced and the pre-test repeated until the coefficient of reliability rises to an acceptable level.
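A minimal sketch of this computation, assuming the 3M / (N1 + N2 + N3) form of the coefficient and hypothetical figures (three judges, 50 decisions each, 42 agreed), is shown below.

```python
# Coefficient of reliability for three judges.
def coefficient_of_reliability(m, n1, n2, n3):
    """m = coding decisions on which all judges agree; n1..n3 = decisions per judge."""
    return (3 * m) / (n1 + n2 + n3)

print(coefficient_of_reliability(42, 50, 50, 50))   # 0.84
```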

(c) Field Methods: In essence, field methods involve the collection of data in the field, that is, the study of human institutions, characteristics or behaviour as they occur in their natural settings. In such settings the researcher rarely has the control needed to establish causality; rather, the goal has to be more one of establishing co-relationship, that is, of seeing the degree to which two variables co-relate (the degree of correlation).

Field methods allow for more normal or natural conditions of selection and exposure. For this reason, it is often suggested that, ideally, laboratory findings should be cross-validated by field studies. In the same manner, suggestive evidence of relationships obtained through field studies should be scrutinized further under the most rigorous control of experimentation.
Types of Field Methods
These include the following basic types:
(a) The Observation Method: (i) direct observation; (ii) indirect observation; (iii) participant observation; (iv) non-participant observation; (v) controlled observation; (vi) uncontrolled observation.
(b) The Interview Method: (i) loosely structured interviews; (ii) highly structured interviews, often with an interview schedule; (iii) open interviews; (iv) closed interviews; (v) face-to-face interviews; (vi) telephone interviews; (vii) oral interviews; (viii) internet interviews; (ix) Focus Group Discussions (FGDs); (x) panel studies; (xi) elite interviews.
Highly structured interviews are often confused with the questionnaire.

This basic difference has implications for the construction of measuring instruments, since the questionnaire has to contain more instructions on how it is to be filled in than the interview schedule does. Experiments require the use of recording schedules, which have much in common with interview schedules. For field methods, the instrument varies from field notebooks (for observations and certain interviews, including loosely structured, open, oral and elite interviews and FGDs) to interview schedules (for highly structured interviews) and the questionnaire (for the questionnaire method).

For those instruments that require elaborate construction, such as the recording schedule, the coding schedule, the interview schedule and the questionnaire, the general principles underlying such construction are itemized below.

Response categories should, as far as possible, be exhaustive. In cases where it is felt that they cannot be exhaustive, create an open space (such as "other, please specify") in which the appropriate value can be indicated. Above all, be clear and unambiguous.

Use simple language. Avoid vague words. The general rule, however, is that the instrument should not carry redundant items. This is especially so for instruments to be administered on people, in order not to compound response-rate problems.
Range of Items
In constructing the instrument, it is important to make wise use of the range of items available. These include the following: (a) Filter items: these help to eliminate subjects as required. (b) General versus specific items: in terms of structure, it is usually preferred that general items be listed before specific ones.

When the subjects are human beings, the question arises: do you list demographic items first — or last? There are two suggestions here.

One is that, because people often get touchy when you start by asking them demographic questions, and also because items that have direct bearing on the research should be asked first, demographic items should be deferred to the very end.

Another suggestion is that demographic items should be listed first because they often provide independent variables; this is even more so in studies whose main goal is to study demographic issues. It also helps to save space. Open-ended items are often problematic, because they need simultaneously to be extensive (many-sided) and intensive, as well as sensitive to nuances. The requirements for the analysis and discussion of qualitative data are basically the same as the requirements for the scientific collection of such data, including the need to be rigorous, systematic, exhaustive, critical and objective.

For quantitative data, or data that can be expressed in quantities, however, there are steps that need to be taken and specific types of statistical tools of analysis to be considered at the stage of analysis and discussion. These issues are addressed here, along with the organization of the research report. Since a whole lecture on research project writing and presentation is to be delivered by someone else at the end of this series of lectures, I have not touched on such matters in the very broad discussion of how to organize your research report highlighted below.

It is expected that, at the end of this series of lectures, participants will be able to: (a) treat and code quantitative data; (b) construct a data matrix manually; (c) identify and manually employ appropriate statistical tools for the descriptive analysis of quantitative data; and (d) identify, manually apply and interpret the results of statistical tools for inferential analyses to establish significance and association with regard to nominal- and ratio-level data.

Data Preparation, Treatment and Coding
At this stage, you go through your measuring instruments to confirm that all relevant items have been responded to. You also confirm that the responses are logical, believable and consistent. For instance, if a respondent's stated age makes it implausible that she has the 10 children she reports, you may have to discard that instrument. Also, if, in reacting to an open question on height, some respondents fill in figures in feet and others in metres, you will need to adopt a common indicator of height and make the necessary conversions.

The next step is to code all responses. It is generally the case that most items on measuring instruments are already pre-coded even before such instruments are used to collect data.

It is, however, also the case that most instruments do contain some un-coded responses. Such responses now have to be coded. Coding is the process of assigning numbers to all values that are not originally expressed in numbers, or of recoding values already expressed in numbers. Coding is required before analysis can proceed, whether the analysis is being done manually or with the aid of a computer. In the case of computer-aided analysis, the only language the computer understands is that of numbers.
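A minimal sketch of coding, assuming hypothetical response categories for marital status and an invented coding scheme, could be:

```python
# Assign a number to each value of a variable not originally expressed in numbers.
marital_status_codes = {"single": 1, "married": 2, "divorced": 3, "other": 4}

responses = ["married", "single", "married", "other", "divorced"]
coded = [marital_status_codes[r] for r in responses]
print(coded)   # [2, 1, 2, 4, 3]
```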

Expressing data in the form of numbers also facilitates analysis done manually.
Construction of Data Matrix
The next stage is to enter all the data into a data matrix. A matrix is a set of rows and columns, and a data matrix comprises all the data generated in the research, arranged so that the values of variables are entered down the columns (from top to bottom) and the data from every subject are entered across the rows (from left to right). In constructing the data matrix, only one number or symbol should be entered into a cell, and the values of a given variable must occupy the same number of columns.

If, for instance, the values of a variable (say, age in years) range from 1 to 99, then those values must occupy two columns, with 1 being entered into the matrix as 01 and 99 as 99.
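A minimal sketch of such a data matrix, assuming three hypothetical subjects and two variables (sex coded in one column, age in two columns so that 7 is entered as 07), might be:

```python
# Each row is one subject; each variable occupies a fixed number of columns.
subjects = [
    {"sex": 1, "age": 34},
    {"sex": 2, "age": 7},
    {"sex": 1, "age": 58},
]

for row in subjects:
    print(f"{row['sex']}{row['age']:02d}")
# 134
# 207
# 158
```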

Descriptive Analysis
With the data matrix completed, it is now possible to move on to analysis, beginning with descriptive analysis of the data. Description begins with a simple frequency count of the occurrence of the values of each of the variables as already reflected in the data matrix. From such frequency counts, we can work out percentages and draw graphical illustrations in the form of bar charts, pie charts, line graphs, histograms and so on, as appropriate.
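A minimal sketch of a frequency count with percentages, using hypothetical coded responses for a single variable, might be:

```python
# Frequency count and percentages for the values of one variable.
from collections import Counter

responses = [1, 2, 2, 3, 1, 2, 4, 2, 1, 3]   # hypothetical coded responses
counts = Counter(responses)
total = len(responses)

for value, freq in sorted(counts.items()):
    print(f"value {value}: frequency {freq}, {100 * freq / total:.1f}%")
```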

The next stage in descriptive analysis is to seek to capture central tendencies or variability in the data, or both.
Measures of Central Tendency
Measures of central tendency assist us to describe the extent to which the data collected hang together. The tools here include: (a) the mode, which is the most frequently occurring value; (b) the median, which is the middle value when all the values are arranged in order (if there is an odd number of values, for example 13, then the middle value, the 7th, is the median); and (c) the mean.

The mean is the mathematical average.
Measures of Variability
These measures assist us to capture the extent to which the data collected are dispersed. They indicate, in other words, how much variation there is in the values of the variable in question.

The tools here include: (a) the range, which is the highest value of the variable minus the lowest value; and (b) and (c) the variance and the standard deviation, which are related measures expressing the degree of variation within a variable on the basis of the average deviation from the mean. The standard deviation is the square root of the variance. To compute them, first list the values of the variable; second, compute the mean (designated x̄); third, compute the deviations from the mean (the sum of these deviations from the mean equals zero); fourth, square each deviation and sum the squares. The variance is the average of these squared deviations, and the standard deviation is its square root.
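A minimal sketch of these descriptive measures, using a hypothetical set of heights in inches and the Python standard library's statistics module, might be:

```python
import statistics

heights = [60, 62, 62, 65, 67, 70, 72]   # hypothetical values

print("mode:", statistics.mode(heights))           # most frequently occurring value
print("median:", statistics.median(heights))       # middle value
print("mean:", statistics.mean(heights))           # mathematical average
print("range:", max(heights) - min(heights))       # highest value minus lowest value
print("variance:", statistics.pvariance(heights))  # average squared deviation from the mean
print("std dev:", statistics.pstdev(heights))      # square root of the variance
```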

Beyond describing the data, we usually want to test hypothesized relationships and draw inferences from the sample to the population. This is what is called inferential analysis. The two basic types of statistical tools, identified by what they enable us to do, are: (a) statistical tests of significance. These help to answer the question: is there a significant relationship between the variables under consideration beyond statistical doubt, or is there a significant difference in the attributes of the groups being compared?

For nominal-level data, a good test of significant relationship is the Chi-Square (χ²) Test of Independence. This helps to establish beyond statistical doubt whether the variables in question are independent of one another.

If they are, then there is no relationship. If they are not independent, and they do relate, it establishes whether the relationship is significant beyond statistical doubt. It is to be noted that, no matter the value of Chi-Square, this test cannot establish how strong the relationship in question is; it can only ascertain whether there is a significant relationship. For ratio-level data, an example of a statistical test of significance is the Pearson Product Moment Correlation (r) test.

(b) Statistical tests of association. These tests help to answer the question: if there is a significant relationship, how strong is that relationship?

In the same vein, an appropriate test of association for ratio-level data is the Coefficient of Determination (r²), which is itself a derivative, being the square of the Pearson Product Moment Correlation (r). It shows how much of the variation in the dependent variable is accounted for by the independent variable. Let us now lay down the steps in conducting these tests.

For the Chi-Square test, first construct a contingency table of the observed frequencies. For each cell, compute the expected frequency (the row total multiplied by the column total, divided by the grand total), then compute (observed minus expected) squared, divided by expected. This is the chi-square value for each cell; the sum across all cells is the computed χ². To decide whether the computed χ² is significant, you are required first to establish the degrees of freedom, given by the number of rows in the table minus one, multiplied by the number of columns minus one, i.e. (r - 1)(c - 1), and the chosen level of significance, with which to read the critical χ² from a statistical table. If the computed χ² is greater than the critical χ², there is significance and the research hypothesis, to the effect that there is a significant relationship between the variables under investigation, is upheld.

If the computed χ² is less than the critical χ², then the research hypothesis is not upheld, and there is no significant relationship. To get the coefficient of determination (r²), square r. Its values range from 0, indicating zero determination, to 1, indicating 100 percent determination.
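A minimal sketch of both computations, using invented figures (a 2 x 2 contingency table for the Chi-Square test and five paired ratio-level observations for r and r²), might be:

```python
import math

# --- Chi-Square test of independence on a hypothetical 2 x 2 table ---
observed = [[30, 20],
            [10, 40]]
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total   # row total x column total / grand total
        chi_square += (obs - expected) ** 2 / expected           # cell-by-cell chi-square value

df = (len(observed) - 1) * (len(observed[0]) - 1)                # (r - 1)(c - 1)
print(f"computed chi-square = {chi_square:.2f}, degrees of freedom = {df}")
# Compare the computed value with the critical value from a chi-square table
# at the chosen level of significance.

# --- Pearson r and coefficient of determination r^2 on hypothetical data ---
x = [2, 4, 6, 8, 10]
y = [3, 5, 8, 9, 12]
n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
r = cov / (sd_x * sd_y)
print(f"r = {r:.3f}, r^2 = {r ** 2:.3f}")
```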

Organization of the Research Report
In the case of supervised work at the Defence College, institutional requirements must take precedence over other considerations. It does appear, however, that a five-chapter format can be recommended against the background of this tradition, basically boiled down to the following:

1. Introduction; 2. Literature Review; 3. Methods; 4. Data Presentation and Analysis; 5. Summary and Conclusion. Finally, participants must remember to indicate their notes and references, as well as the bibliography, as directed by the Defence College. See also Rudestam and Newton, Chapters 4-7.

Data analysis is the language of research. In many fields, research is critical for human progress; therefore, as long as there is research, there will be a need for data analysis. This guide is useful for users of SPSS versions 8, 9, 10 and later. These versions are similar, although there are some differences because of additional functions in the higher versions.

It is recommended that you use this guide whilst sitting at a computer that is running SPSS. This guide is designed to help you analyze data on your own. A basic knowledge of statistics and a general acquaintance with the use of the computer are, however, required for this guide to be effective in guiding you on how to conduct analysis with SPSS.

Knowing which procedure to use, and what the output means, requires at least a rudimentary knowledge of statistics. Even so, the guide should provide individuals with a limited statistical background with ways of using SPSS to conduct statistical operations.

An overview of the structure of this guide follows. First, data analysis is introduced through the windows and buttons you will use when analyzing your data with SPSS. How to start and exit SPSS, how to create and save a data file, and how to obtain some simple descriptive statistics are then described.



