
Jean Watson’s Theory of Human Caring is a conceptual thread in the USU College of Nursing’s curriculum framework. The purpose of this assignment is to offer students the opportunity to be exposed to Human Caring Science while providing students with the skills of critical appraisal of evidence.

Students will select one nursing research article that focuses on a study that used Jean Watson’s Theory of Human Caring as a theoretical framework.
Students should use, as a guide, an appropriate Rapid Critical Appraisal Checklist found in Melnyk.

O’Leary, Z. (2005). Researching real-world problems. Thousand Oaks, CA: SAGE. Ch. 11.


Analysing and Interpreting Data


It’s easy to fall into the trap of thinking the major hurdle in conducting real-world research is data collection. And yes, gathering credible data is certainly a challenge – but so is making sense of it. As George Eliot states, the key to meaning is ‘interpretation’.

Now attempting to interpret a mound of data can be intimidating. Just looking at it can bring on a nasty headache or a mild anxiety attack. So the question is, what is the best way to make a start? How can you begin to work through your data?

Well, if I were only allowed to give one piece of advice, it would be to engage in creative and inspired analysis using a methodical and organized approach. As described in Box 11.1, the best way to move from messy, complex and chaotic raw data … towards rich, meaningful and eloquent understandings is by working through your data in ways that are creative, yet managed within a logical and systematic framework.

Box 11.1 Balancing Creativity and Focus


Think outside the square … yet stay squarely on target

Be original, innovative, and imaginative … yet know where you want to go

Use your intuition … but be able to share the logic of that intuition

Be fluid and flexible … yet deliberate and methodical

Be inspired, imaginative and ingenious … yet realistic and practical


Easier said than done, I know. But if you break the process of analysis down into a number of defined tasks, it’s a challenge that can be conquered. For me, there are five tasks that need to be managed when conducting analysis:


Keeping your eye on the main game. This means not getting lost in a swarm of numbers and words in a way that causes you to lose a sense of what you’re trying to accomplish.

Managing, organizing, preparing and coding your data so that it’s ready for your intended mode(s) of analysis.

Engaging in the actual process of analysis. For quantified data, this will involve some level of statistical analysis, while working with words and images will require you to call on qualitative data analysis strategies.

Presenting data in ways that capture understandings, and being able to offer those understandings to others in the clearest possible fashion.

Drawing meaningful and logical conclusions that flow from your data and address key issues.

This chapter tackles each of these challenges in turn.

Keeping your eye on the main game

While the thought of getting into your data can be daunting, once you take the plunge it’s actually quite easy to get lost in the process. Now this is great if ‘getting lost’ means you are engaged and immersed and really getting a handle on what’s going on. But getting lost can also mean getting lost in the tasks, that is, handing control to analysis programs, and losing touch with the main game. You need to remember that while computer programs might be able to do the ‘tasks’, it is the researcher who needs to work strategically, creatively and intuitively to get a ‘feel’ for the data; to cycle between data and existing theory; and to follow the hunches that can lead to sometimes unexpected, yet significant findings.


Have a look at Figure 11.1. It’s based on a model I developed a while ago that attempts to capture the full ‘process’ of analysis; a process that is certainly more complex and comprehensive than simply plugging numbers or words into a computer. In fact, real-world analysis involves staying as close to your data as possible – from initial collection right through to drawing final conclusions. And as you move towards these conclusions, it’s essential that you keep your eye on the game in a way that sees you consistently moving between your data and … your research questions, aims and objectives, theoretical underpinnings and methodological constraints. Remember, even the most sophisticated analysis is worthless if you’re struggling to grasp the implications of your findings for your overall project.

Rather than relinquish control of your data to ‘methods’ and ‘tools’, thoughtful analysis should see you persistently interrogating your data, as well as the findings that emerge from that data. In fact, as highlighted in Box 11.2, keeping your eye on the game means asking a number of questions throughout the process of analysis.

Box 11.2 Questions for Keeping the Bigger Picture in Mind


Questions related to your own expectations


What do I expect to find (i.e. will my hypothesis bear out)?

What don’t I expect to find, and how can I look for it?

Can my findings be interpreted in alternative ways? What are the implications?

Questions related to research question, aims and objectives


How should I treat my data in order to best address my research questions?

How do my findings relate to my research questions, aims and objectives?

Questions related to theory


Are my findings confirming my theories? How? Why? Why not?

Does my theory inform/help to explain my findings? In what ways?

Can my unexpected findings link with alternative theories?

Questions related to methods


Have my methods of data collection and/or analysis coloured my results? If so, in what ways?

How might my methodological shortcomings be affecting my findings?

Managing the data

Data can build pretty quickly, and you might be surprised by the amount of data you have managed to collect. For some, this will mean coded notebooks, labelled folders, sorted questionnaires, transcribed interviews, etc. But for the less pedantic, it might mean scraps of paper, jotted notes, an assortment of cuttings and bulging files. No matter what the case, the task is to build or create a ‘data set’ that can be managed and utilized throughout the process of analysis.

Now this is true whether you are working with: (a) data you’ve decided to quantify; (b) data you’ve captured and preserved in a qualitative form; (c) a combination of the above (there can be real appeal in combining the power of words with the authority of numbers). Regardless of approach, the goal is the same – a rigorous and systematic approach to data management that can lead to credible findings. Box 11.3 runs through six steps I believe are essential for effectively managing your data.

Box 11.3 Data Management


Step 1 Familiarize yourself with appropriate software

This involves accessing programs and arranging necessary training. Most universities (and some workplaces) have licences that allow students certain software access, and many universities provide relevant short courses. Programs themselves generally contain comprehensive tutorials complete with mock data sets.

Quantitative analysis will demand the use of a data management/statistics program, but there is some debate as to the necessity of specialist programs for qualitative data analysis. This debate is taken up later in the chapter, but the advice here is that it’s certainly worth becoming familiar with the tools available.


Quantitative programs

SPSS – sophisticated and user-friendly

SAS – often an institutional standard, but many feel it is not as user-friendly as SPSS

Minitab – more introductory, good for learners/small data sets

Excel – while not a dedicated stats program it can handle the basics and is readily available on most PCs (Microsoft Office product)

Qualitative programs

Absolutely essential is an up-to-date word processing package.

Specialist packages include:

NU*DIST, NVIVO, MAXqda, The Ethnograph – used for indexing, searching and theorizing

ATLAS.ti – can be used for images as well as words

CONCORDANCE, HAMLET, DICTION – popular for content analysis

CLAN-CA – popular for conversation analysis

Step 2 Log in your data

Data can come from a number of sources at various stages throughout the research process, so it’s well worth keeping a record of your data as it’s collected. Keep in mind that original data should be kept for a reasonable period of time; researchers need to be able to trace results back to original sources.

Step 3 Organize your data

This involves grouping like sources, making any necessary copies and conducting an initial cull of any notes, observations, etc. not relevant to the analysis.

Step 4 Screen your data for any potential problems

This includes a preliminary check to see if your data is legible and complete. If done early, you can uncover potential problems not picked up in your pilot/trial, and make improvements to your data collection protocols.

Step 5 Enter the data

This involves systematically entering your data into a database or analysis program, as well as creating codebooks, which can be electronically based, that describe your data and keep track of how it can be accessed.


Quantitative data

Codebooks often include: the respondent or group; the variable name and description; unit of measurement; date collected; any relevant notes.

Data entry: data can be entered as it is collected or after it has all come in. Analysis does not take place until after data entry is complete. Figure 11.2 depicts an SPSS data entry screen.

Qualitative data

Codebooks often include: respondents; themes; data collection procedures; collection dates; commonly used shorthand; and any other notes relevant to the study.

Data entry: whether using a general word processing program or specialist software, data is generally transcribed in an electronic form and is worked through as it is received. Analysis tends to be ongoing and often begins before all the data has been collected/entered.
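In practice, a codebook can be as simple as a structured record of each variable, its valid codes and when it was collected. A minimal sketch in Python (the variable names and dates are hypothetical, not from the chapter's data set):

```python
# A minimal, hypothetical codebook for a small quantitative data set.
# Each variable records its description, unit or valid codes, and collection date.
codebook = {
    "age": {"description": "Respondent age at last birthday",
            "unit": "years", "collected": "2005-03"},
    "plans": {"description": "Plans after graduation",
              "codes": {1: "vocational/technical training", 2: "university",
                        3: "workforce", 4: "travel abroad",
                        5: "undecided", 6: "other"},
              "collected": "2005-03"},
}

# Data entry: one record per respondent, keyed by the codebook's variable names.
records = [
    {"id": 1, "age": 17, "plans": 2},
    {"id": 2, "age": 16, "plans": 3},
]

# The codebook lets you translate stored codes back to labels at any time.
label = codebook["plans"]["codes"][records[0]["plans"]]
```

Keeping the codebook alongside the data means the arbitrary numeric codes never lose their meaning, even if the data set is revisited years later.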


Step 6 Clean the data

This involves combing through the data to make sure any entry errors are found, and that the data set looks in order.

Quantitative data

When entering quantified data it’s easy to make mistakes, such as typos, particularly if you’re moving fast. It’s essential that you go through your data to make sure it’s as accurate as possible.

Qualitative data

Because qualitative data is generally handled as it’s collected, there is often a chance to refine processes as you go. In this way your data can be as ‘ready’ as possible for analysis.
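Part of the cleaning step can be automated by screening every entered value against the codebook. A small sketch (the records, variables and valid ranges here are hypothetical):

```python
# Screen entered quantitative data for out-of-range codes -- a common
# source of entry error that cleaning should catch.
valid_plans = {1, 2, 3, 4, 5, 6}   # codes defined in the codebook
valid_age = range(5, 120)          # plausible age range for the study

records = [
    {"id": 1, "age": 17, "plans": 2},
    {"id": 2, "age": 170, "plans": 3},   # typo: 170 entered instead of 17
    {"id": 3, "age": 16, "plans": 9},    # 9 is not a valid code
]

# Collect every (respondent, variable, value) that fails a check.
problems = []
for rec in records:
    if rec["age"] not in valid_age:
        problems.append((rec["id"], "age", rec["age"]))
    if rec["plans"] not in valid_plans:
        problems.append((rec["id"], "plans", rec["plans"]))
```

The flagged entries can then be traced back to the original questionnaires (which is one reason Step 2 insists on keeping original data).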



‘Doctors say that Nordberg has a 50/50 chance of living, though there’s only a 10 percent chance of that.’

– Naked Gun


It wasn’t long ago that ‘doing’ statistics meant working with formulae, but personally, I don’t believe in the need for all real-world researchers to master formulae. Doing statistics in the twenty-first century is more about your ability to use statistical software than your ability to calculate means, modes, medians and standard deviations – and look up p-values in the back of a book. To say otherwise is to suggest that you can’t ride a bike unless you know how to build one. What you really need to do is to learn how to ride, or in this case learn how to run a stats program.

Okay, I admit these programs do demand a basic understanding of the language and logic of statistics. And this means you will need to get your head around (1) the nature of variables; (2) the role and function of both descriptive and inferential statistics; (3) appropriate use of statistical tests; and (4) effective data presentation. But if you can do this, effective statistical analysis is well within your grasp.

Now before I jump in and talk about the above a bit more, I think it’s important to stress that …

Very few students can get their heads around statistics without getting into some data.

While this chapter will familiarize you with the basic language and logic of statistics, it really is best if your reading is done in conjunction with some hands-on practice (even if this is simply playing with the mock data sets provided in stats programs). For this type of knowledge ‘to stick’, it needs to be applied.


Understanding the nature of variables is essential to statistical analysis. Different data types demand discrete treatment. Using the appropriate statistical measures to both describe your data and to infer meaning from your data requires that you clearly understand your variables in relation to both cause and effect and measurement scales.

Cause and effect

The first thing you need to understand about variables relates to cause and effect. In research-methods-speak, this means being able to clearly identify and distinguish your dependent and independent variables. Now while understanding the theoretical difference is not too tough, being able to readily identify each type comes with practice.

DEPENDENT VARIABLES These are the things you are trying to study or what you are trying to measure. For example, you might be interested in knowing what factors are related to high levels of stress, a strong income stream, or levels of achievement in secondary school – stress, income and achievement would all be dependent variables.

INDEPENDENT VARIABLES These are the things that might be causing an effect on the things you are trying to understand. For example, conditions of employment might be affecting stress levels; gender may have a role in determining income; while parental influence may impact on levels of achievement. The independent variables here are employment conditions, gender and parental influence.

One way of identifying dependent and independent variables is simply to ask what depends on what. Stress depends on work conditions or income depends on gender. As I like to tell my students, it doesn’t make sense to say gender depends on income unless you happen to be saving for a sex-change operation!

Measurement scales

Measurement scales refer to the nature of the differences you are trying to capture in relation to a particular variable (examples below). As summed up in Table 11.1, there are four basic measurement scales that become respectively more precise: nominal, ordinal, interval and ratio. The precision of each type is directly related to the statistical tests that can be performed on them. The more precise the measurement scale, the more sophisticated the statistical analysis you can do.

NOMINAL Numbers are arbitrarily assigned to represent categories. These numbers are simply a coding scheme and have no numerical significance (and therefore cannot be used to perform mathematical calculations). For example, in the case of gender you would use one number for female, say 1, and another for male, 2. In an example used later in this chapter, the variable ‘plans after graduation’ is also nominal with numerical values arbitrarily assigned as 1 = vocational/technical training, 2 = university, 3 = workforce, 4 = travel abroad, 5 = undecided and 6 = other. In nominal measurement, codes should not overlap (they should be mutually exclusive) and together should cover all possibilities (be collectively exhaustive). The main function of nominal data is to allow researchers to tally respondents in order to understand population distributions.

ORDINAL This scale rank orders categories in some meaningful way – there is an order to the coding. Magnitudes of difference, however, are not indicated. Take for example, socio-economic status (lower, middle, or upper class). Lower class may denote less status than the other two classes but the amount of the difference is not defined. Other examples include air travel (economy, business, first class), or items where respondents are asked to rank order selected choices (biggest environmental challenges facing developed countries). Likert-type scales, in which respondents are asked to select a response on a point scale (for example, ‘I enjoy going to work’: 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = strongly agree), are ordinal since a precise difference in magnitude cannot be determined. Many researchers, however, treat Likert scales as interval because it allows them to perform more precise statistical tests. In most small-scale studies this is not generally viewed as problematic.

INTERVAL In addition to ordering the data, this scale uses equidistant units to measure difference. This scale does not, however, have an absolute zero. An example here is date – the year 2006 occurs 41 years after the year 1965, but time did not begin in AD1. IQ is also considered an interval scale even though there is some debate over the equidistant nature between points.

RATIO Not only is each point on a ratio scale equidistant, there is also an absolute zero. Examples of ratio data include age, height, distance and income. Because ratio data are ‘real’ numbers all basic mathematical operations can be performed.
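The practical consequence of the four scales is which operations your software can legitimately perform. A short sketch with made-up data for each type:

```python
from collections import Counter
from statistics import mean, median

# Nominal: codes are labels only -- tally them, never average them.
plans = [1, 2, 2, 3, 2, 5]        # 1 = vocational, 2 = university, ...
tally = Counter(plans)            # distribution across categories

# Ordinal: order is meaningful, magnitude is not -- the median is safe.
travel_class = [1, 1, 2, 3, 1]    # 1 = economy, 2 = business, 3 = first
mid = median(travel_class)

# Interval/ratio: equidistant units -- the mean is meaningful.
ages = [8, 12, 13, 17]
avg_age = mean(ages)
```

Averaging the `plans` codes would produce a number, but a meaningless one, which is exactly the point of the gender/religion examples above.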

Descriptive statistics

Descriptive statistics are used to describe the basic features of a data set and are key to summarizing variables. The goal is to present quantitative descriptions in a manageable and intelligible form. Descriptive statistics provide measures of central tendency, dispersion and distribution shape. Such measures vary by data type (nominal, ordinal, interval, ratio) and are standard calculations in statistical programs. In fact, when generating the example tables for this section, I used the statistics program SPSS. After entering my data, I generated my figures by going to ‘Analyze’ on the menu bar, clicking on ‘Descriptive Statistics’, clicking on ‘Frequencies’, and then defining the statistics and charts I required.


Measuring central tendency

One of the most basic questions you can ask of your data centres on central tendency. For example, what was the average score on a test? Do most people lean left or right on the issue of abortion? Or what do most people think is the main problem with our health care system? In statistics, there are three ways to measure central tendency (see Table 11.2): mean, median and mode – and the example questions above respectively relate to these three measures. Now while measures of central tendency can be calculated manually, all stats programs can automatically calculate these figures.

MEAN The mathematical average. To calculate the mean, you add the values for each case and then divide by the number of cases. Because the mean is a mathematical calculation, it is used to measure central tendency for interval and ratio data, and cannot be used for nominal or ordinal data where numbers are used as ‘codes’. For example, it makes no sense to average the 1s, 2s and 3s that might be assigned to Christians, Buddhists and Muslims.

MEDIAN The mid-point of a range. To find the median you simply arrange values in ascending (or descending) order and find the middle value. This measure is generally used in ordinal data, and has the advantage of negating the impact of extreme values. Of course, this can also be a limitation given that extreme values can be significant to a study.

MODE The most common value or values noted for a variable. Since nominal data is categorical and cannot be manipulated mathematically, it relies on mode as its measure of central tendency.
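All three measures are one-liners in Python's standard statistics module; a quick sketch with hypothetical test scores:

```python
from statistics import mean, median, mode

scores = [55, 70, 70, 80, 95]   # hypothetical test scores (interval data)

avg = mean(scores)      # mathematical average: 74
mid = median(scores)    # middle value of the ordered list: 70
common = mode(scores)   # most common value: 70
```

Note how the one extreme score (95) pulls the mean above the median, a first hint that the measures tell you different things about the same data.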

Measuring dispersion

While measures of central tendency are a standard and highly useful form of data description and simplification, they need to be complemented with information on response variability. For example, say you had a group of students with IQs of 100, 100, 95 and 105, and another group of students with IQs of 60, 140, 65 and 135, the central tendency, in this case the mean, of both groups would be 100. Dispersion around the mean, however, will require you to design curriculum and engage learning with each group quite differently. There are several ways to understand dispersion, which are appropriate for different variable types (see Table 11.3). As with central tendency, statistics programs will automatically generate these figures on request.

RANGE This is the simplest way to calculate dispersion, and is simply the highest minus the lowest value. For example, if your respondents ranged in age from 8 to 17, the range would be 9 years. While this measure is easy to calculate, it is dependent on extreme values alone, and ignores intermediate values.

QUARTILES This involves subdividing your range into four equal parts or ‘quartiles’ and is a commonly used measure of dispersion for ordinal data, or data whose central tendency is measured by a median. It allows researchers to compare the various quarters or present the inner 50% as a dispersion measure. This is known as the inter-quartile range.

VARIANCE This measure uses all values to calculate the spread around the mean, and is actually the ‘average squared deviation from the mean’. It needs to be calculated from interval and ratio data and gives a good indication of dispersion. It’s much more common, however, for researchers to use and present the square root of the variance which is known as the standard deviation.

STANDARD DEVIATION This is the square root of the variance, and is the basis of many commonly used statistical tests for interval and ratio data. As explained below, its power comes to the fore with data that sits under a normal curve.
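The two IQ groups from the example above make the point concrete. A sketch using the standard statistics module (pstdev treats each group as a complete population rather than a sample):

```python
from statistics import mean, pstdev

group_a = [100, 100, 95, 105]
group_b = [60, 140, 65, 135]

# Identical central tendency...
assert mean(group_a) == mean(group_b) == 100

# ...but very different dispersion.
range_a = max(group_a) - min(group_a)   # 10
range_b = max(group_b) - min(group_b)   # 80
sd_a = pstdev(group_a)                  # about 3.5
sd_b = pstdev(group_b)                  # about 37.6
```

A teacher seeing only the means would treat the groups identically; the standard deviations show why that would be a mistake.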

Measuring the shape of the data

To fully understand a data set, central tendency and dispersion need to be considered in light of the shape of the data, or how the data is distributed. As shown in Figure 11.3, a normal curve is ‘bell-shaped’; the distribution of the data is symmetrical, with the mean, median and mode all converged at the highest point in the curve. If the distribution of the data is not symmetrical, it is considered skewed. In skewed data the mean, median and mode fall at different points.

Kurtosis characterizes how peaked or flat a distribution is compared to ‘normal’. Positive kurtosis indicates a relatively peaked distribution, while negative kurtosis indicates a flatter distribution.

The significance in understanding the shape of a distribution is in the statistical inferences that can be drawn. As shown in Figure 11.4, a normal distribution is subject to a particular set of rules regarding the significance of a standard deviation. Namely that:

 68.2% of cases will fall within one standard deviation of the mean

95.4% of cases will fall within two standard deviations of the mean

99.7% of cases will fall within three standard deviations of the mean

So if we had a normal curve for the sample data relating to ‘age of participants’ (mean = 12.11, s.d. = 2.22 – see Tables 11.2 and 11.3), 68.2% of participants would fall between the ages of 9.89 and 14.33 (12.11 − 2.22 and 12.11 + 2.22).
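The one-standard-deviation band can be computed directly, and the 68% rule checked by simulation. A sketch using the mean and s.d. from the sample data above (the simulation itself is illustrative, not from the chapter):

```python
import random

mean_age, sd = 12.11, 2.22
lower, upper = mean_age - sd, mean_age + sd   # 9.89 and 14.33

# Simulate normally distributed ages and count cases within one s.d.
random.seed(0)
draws = [random.gauss(mean_age, sd) for _ in range(100_000)]
within = sum(lower <= x <= upper for x in draws) / len(draws)
# 'within' lands close to 0.68, as the rule predicts
```

For skewed data this check would fail, which is precisely why non-normal distributions need the non-parametric tests discussed below.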

These rules of the normal curve allow for the use of quite powerful statistical tests and are generally used with interval and ratio data (sometimes called parametric tests). For data that does not follow the assumptions of a normal curve (nominal and ordinal data), the researcher needs to call on non-parametric statistical tests in making inferences.

Table 11.4 shows the curve, skewness and kurtosis of our sample data set.

Inferential statistics

While the goal of descriptive statistics is to describe and summarize, the goal of inferential statistics is to draw conclusions that extend beyond immediate data. For example, inferential statistics can be used to estimate characteristics of a population from sample data, or to test various hypotheses about the relationship between different variables. Inferential statistics allow you to assess the probability that an observed difference is not just a fluke or chance finding. In other words, inferential statistics is about drawing conclusions that are statistically significant.

Statistical significance

Statistical significance refers to a measure, or ‘p-value’, which assesses the ‘probability’ that your findings are merely coincidental. Conventional p-value thresholds are .05, .01 and .001, which tell you that the probability your findings have occurred by chance is 5/100, 1/100 or 1/1,000 respectively. Basically, the lower the p-value, the more confident researchers can be that findings are genuine. Keep in mind that researchers do not usually accept findings that have a p-value greater than .05, because the probability that the findings are coincidental or caused by sampling error is too great.
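The logic behind a p-value can be made concrete with a permutation test: shuffle the group labels many times and ask how often chance alone produces a difference as large as the one observed. A minimal sketch with hypothetical recovery times (this is one simple way to estimate a p-value, not the only test a stats package would offer):

```python
import random
from statistics import mean

# Hypothetical recovery times (days) for two treatment groups.
group_a = [4, 5, 6, 5, 4, 5]
group_b = [7, 8, 6, 9, 7, 8]
observed = abs(mean(group_a) - mean(group_b))

# Repeatedly shuffle the pooled data and re-split it at random,
# counting how often a difference at least this large arises by chance.
pooled = group_a + group_b
random.seed(0)
trials, extreme = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    if abs(mean(pooled[:6]) - mean(pooled[6:])) >= observed:
        extreme += 1

p_value = extreme / trials
# p_value well below .05 suggests the group difference is not a fluke
```

Here only a handful of the 10,000 random splits match the observed difference, so the estimated p-value falls far below the conventional .05 threshold.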

Questions suitable to inferential statistics

It’s easy enough to tell students and new researchers that they need to interrogate their data, but it doesn’t tell them what they should be asking. Box 11.4 offers some common questions which, while not exhaustive, should give you some ideas for interrogating real-world data using inferential statistics.

Box 11.4 Questions for Interrogating Quantitative Data using Inferential Statistics



How do participants in my study compare to a larger population? These types of question compare a sample with a population. For example, say you are conducting a study of patients in a particular coronary care ward. You might ask if the percentage of males or females in your sample, or their average age, or their ailments are statistically similar to coronary care patients across the country. To answer such questions you will need access to population data for this larger range of patients.

Are there differences between two or more groups of respondents? Questions that compare two or more groups are very common and are often referred to as ‘between subject’. I’ll stick with a medical theme here … For example, you might ask if male and female patients are likely to have similar ailments; or whether patients of different ethnic backgrounds have distinct care needs; or whether patients who have undergone different procedures have different recovery times.

Have my respondents changed over time?

These types of question involve before and after data with either the same group of respondents or respondents who are matched by similar characteristics. They are often referred to as ‘within subject’. An example of this type of question might be, ‘have patients’ dietary habits changed since undergoing bypass surgery?’

Is there a relationship between two or more variables?

These types of question can look for either correlations (simply an association) or cause and effect. Examples of correlation questions might be, ‘Is there an association between time spent in hospital and satisfaction with nursing staff?’ or, ‘Is there a correlation between patient’s age and the medical procedure they have undergone?’ Questions looking for cause and effect differentiate dependent and independent variables. For example, ‘Does satisfaction depend on length of stay?’ or, ‘Does stress depend on adequacy of medical insurance?’ Cause and effect relationships can also look to more than one independent variable to explain variation in the dependent variable. For example, ‘Does satisfaction with nursing staff depend on a combination of length of stay, age and severity of medical condition?’

(I realize that all of these examples are drawn from the medical or nursing fields, but application to other respondent groups is pretty straightforward. In fact, a good exercise here is to try to come up with similar types of question for alternative respondent groups.)
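The simple-association questions in Box 11.4 are typically answered with a correlation coefficient. A sketch computing Pearson's r from scratch with the standard library (the length-of-stay and satisfaction figures are invented for illustration):

```python
from math import sqrt
from statistics import mean

# Hypothetical data: days in hospital vs satisfaction score (1-10).
length_of_stay = [2, 3, 5, 7, 9, 12]
satisfaction = [9, 8, 7, 6, 4, 3]

mx, my = mean(length_of_stay), mean(satisfaction)
dx = [x - mx for x in length_of_stay]
dy = [y - my for y in satisfaction]

# Pearson's r: covariance scaled by the product of the two spreads.
r = sum(a * b for a, b in zip(dx, dy)) / sqrt(
    sum(a * a for a in dx) * sum(b * b for b in dy))
```

An r near −1, as here, indicates a strong negative association (longer stays go with lower satisfaction in this made-up sample); remember that correlation alone says nothing about which variable, if either, is the cause.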

Selecting the right statistical test

There is a baffling array of statistical tests out there that can help you answer the types of question highlighted in Box 11.4. And programs such as SPSS and SAS are capable of running such tests without you needing to know the technicalities of their mathematical operations. The problem, however, is knowing which test is right for your particular application. Luckily, you can turn to a number of test selectors now available on the Internet (such as Bill Trochim’s test selector) and built into programs such as MODSTAT and SPSS.

But even with the aid of such selectors (including the tabular one I offer below), you still need to know the nature of your variables (independent/dependent); scales of measurement (nominal, ordinal, interval, ratio); distribution shape (normal or skewed); the types of questions you want to ask; and the types of conclusions you are trying to draw.

Table 11.5 covers the most common tests for univariate (one variable), bivariate (two variable) and multivariate (three or more variable) data. The table can be read down the first column for univariate data (the column provides an example of the data type, its measure of central tendency, dispersion and appropriate tests for comparing this type of variable to a population). It can also be read as a grid for exploring the relationship between two or more variables. Once you know what tests to conduct, your statistical software will be able to run the analysis and assess statistical significance.

Presenting quantitative data

When it comes to presenting quantitative data, there can be a real temptation to offer graphs, charts and tables for every single variable in your study. So the first key to effective data presentation is to resist this temptation, and actively determine what is most important in your work. Your findings need to tell a story related to your aims, objectives and research questions.

Now when it comes to how your data should be presented, I think there is one golden rule: it should not be hard work for the reader. Most people’s eyes glaze over when it comes to statistics, so your data should not be hard to decipher. You should not need to be a statistician to understand it. Your challenge is to graphically and verbally present your data so that meanings are clear. Any graphs and tables you present should ease the task for the reader. So while you need to include adequate information, you don’t want to go into information overload. Box 11.5 covers the basics of graphic presentation, while Box 11.6 looks at the presentation of quantitative data in tabular form.



‘Not everything that can be counted counts, and not everything that counts can be counted.’

– Albert Einstein


I’d always thought of Einstein as an archetypal ‘scientist’. But I’ve come to find that he is archetypal only if this means scientists are extraordinarily witty, insightful, political, creative and open-minded. Which, contrary to the stereotype, is exactly what I think is needed for groundbreaking advances in science. So when Einstein himself recognizes the limitations of quantification, it is indeed a powerful endorsement for working with qualitative data.

Yes, using statistics is a clearly defined and effective way of reducing and summarizing data. But statistics rely on the reduction of meaning to numbers, and there are two concerns here. First, meanings can be both intricate and complex, making it difficult to reduce them to numbers. Second, even with such a reduction, there can be a loss of ‘richness’ associated with the process.

These two concerns have led to the development of a plethora of qualitative data analysis (QDA) approaches that aim to create new understandings by exploring and interpreting complex data from sources such as interviews, group discussions, observation, journals, archival documents etc., without the aid of quantification. But the literature related to these approaches is quite thick, and wading through it in order to find appropriate and effective strategies can be a real challenge. Many students end up: (1) spending a huge amount of time attempting to work through the vast array of approaches and associated literature; (2) haphazardly selecting one method that may or may not be appropriate to their project; (3) conducting their analysis without any well-defined methodological protocols; or (4) doing a combination of the above.

So while we know that there is inherent power in words and images, the challenge is working through options for managing and analysing qualitative data that best preserve richness yet crystallize meaning. And I think the best way to go about this is to become familiar with both the logic and methods that underpin most QDA strategies. Once this foundation is set, working through more specific, specialist QDA strategies becomes much easier.

Logic and methods

Given that we have to make sense of complex, messy and chaotic qualitative data in the real world every day, you wouldn’t think it would be too hard to articulate a rigorous QDA process. But the analysis we do on a day-to-day basis tends to be at the subconscious level, and is a process so full of rich subtleties (and subjectivities) that it is actually quite difficult to articulate and formalize.

There is some consensus, however, that the best way to move from raw qualitative data to meaningful understanding is through data immersion that allows you to uncover and discover themes that run through the raw data, and by interpreting the implication of those themes for your research project.

Discovering and uncovering

As highlighted in Figure 11.5, moving from raw data, such as transcripts, pictures, notes, journals, videos, documents, etc., to meaningful understanding is a process reliant on the generation/exploration of relevant themes; and these themes can either be discovered or uncovered. So what do I mean by this?

Well, you may decide to explore your data inductively from the ground up. In other words, you may want to explore your data without a predetermined theme or theory in mind. Your aim might be to discover themes and eventuating theory by allowing them to emerge from the data. This is often referred to as the production of grounded theory or ‘theory that was derived from data systematically gathered and analyzed through the research process’ (Strauss and Corbin 1998, p. 12).

In order to generate grounded theory, researchers engage in a rigorous and iterative process of data collection and ‘constant comparative’ analysis that finds raw data brought to increasingly higher levels of abstraction until theory is generated. This method of theory generation (which shares the same name as its product – grounded theory) has embedded within it very well-defined and clearly articulated techniques for data analysis (see readings at the end of the chapter). And it is precisely this clear articulation of grounded theory techniques that has seen them become central to many QDA strategies.

It is important to realize, however, that discovering themes is not the only QDA option. You may have predetermined (a priori) themes or theory in mind – they might have come from engagement with the literature; your prior experiences; the nature of your research question; or from insights you had while collecting your data. In this case, you are trying to deductively uncover data that supports predetermined theory. In a sense, you are mining your data for predetermined categories of exploration in order to support ‘theory’. Rather than theory emerging from raw data, theory generation depends on progressive verification.

While grounded theory approaches are certainly a mainstay in QDA, researchers who only engage in grounded theory literature can fall prey to the false assumption that all theory must come inductively from data. This need not be the case. The need to generate theory directly from data will not be appropriate for all researchers, particularly those wishing to test ‘a priori’ theories or mine their data for predetermined themes.

Mapping themes

Whether themes are to be discovered or uncovered, the key to QDA is rich engagement with the documents, transcripts, images, texts, etc. that make up a researcher’s raw data. So how do you begin to engage with data in order to discover and uncover themes in what is likely to be an unwieldy raw data set?

Well one way to look at it might be as a rich mapping process. Technically, when deductively uncovering data related to ‘a priori’ themes the map would be predetermined. However, when inductively discovering themes using a grounded theory approach the map would be built as you work through your data. In practice, however, the distinction is unlikely to be that clear, and you will probably rely on both strategies to build the richest map possible.

Figure 11.6 offers a map exploring poor self-image in young girls built through both inductive and deductive processes. That is, some initial ideas were noted, but other concepts were added and linked as data immersion occurred.

It’s also worth noting that this type of mind map can be easily converted to a ‘tree structure’ that forms the basis of analysis in many QDA software programs, including NUD*IST (see Figure 11.7).
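To make the idea of a ‘tree structure’ concrete, here is a minimal Python sketch of how a theme map might be stored and flattened into the hierarchical codes that QDA packages typically index. The node names are invented for illustration; they are not taken from O’Leary’s Figure 11.6.

```python
# A hypothetical coding tree for a study of poor self-image in young girls.
# Each key is a theme; its value is a dict of sub-themes (empty = leaf node).
theme_tree = {
    "poor self-image": {
        "media influences": {"advertising": {}, "social comparison": {}},
        "peer pressure": {"teasing": {}, "exclusion": {}},
        "family": {"parental comments": {}},
    }
}

def flatten(tree, path=()):
    """Yield every node as a '/'-joined path, the way QDA software
    often displays hierarchical codes."""
    for name, children in tree.items():
        node = path + (name,)
        yield "/".join(node)
        yield from flatten(children, node)

for code in flatten(theme_tree):
    print(code)
```

Passages in the raw data would then be tagged with one or more of these path-style codes, which is what makes later retrieval and comparison efficient.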

Delving into data

When it comes to QDA, delving into your data generally occurs as it is collected and involves: (1) reading and re-reading; (2) annotating growing understanding in notes and memos; (3) organizing and coding data; and (4) searching for patterns in a bid to build and verify theories.

The process of organizing and coding can occur at a number of levels and can range from highly structured, quasi-statistical counts to rich, metaphoric interpretations. Qualitative data can be explored for the words that are used; the concepts that are discussed; the linguistic devices that are called upon; and the nonverbal cues noted by the researcher.

EXPLORING WORDS Words can lead to themes through exploration of their repetition, or through exploration of their context and usage (sometimes called key words in context). Specific cultural connotations of particular words can also lead to relevant themes. Patton (2001) refers to this as ‘indigenous categories’, while Strauss and Corbin (1998) refer to it as ‘in vivo’ coding.

To explore word-related themes researchers systematically search a text to find all instances of a particular word (or phrase) making note of its context and meaning. Several software packages, such as DICTION or CONCORDANCE, can quickly and efficiently identify and tally the use of particular words and even present such findings in a quantitative manner.
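The basic mechanics of this kind of search, tallying word repetition and pulling out key words in context, are easy to sketch even without specialist software. The Python snippet below is a toy illustration only (the sample text and the three-word context window are my own assumptions), not a substitute for a dedicated concordance tool:

```python
import re
from collections import Counter

transcript = (
    "They watch me. I can hear them, and I see their shadows. "
    "They think I don't see them, but I do."
)

# Tokenize and tally word repetition.
words = re.findall(r"[a-z']+", transcript.lower())
tally = Counter(words)
print(tally.most_common(3))

def kwic(tokens, keyword, window=3):
    """Show each occurrence of `keyword` with `window` words of context
    on either side (a simple 'key word in context' listing)."""
    for i, w in enumerate(tokens):
        if w == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            yield f"{left} [{w}] {right}"

for line in kwic(words, "see"):
    print(line)
```

Even this crude tally hints at a theme: the speaker’s heavy use of ‘they’ and ‘them’ points toward the surveillance concern that a researcher would then explore in context.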

EXPLORING CONCEPTS Concepts can be deductively uncovered by searching for themes generated from: the literature; the hypothesis/research question; intuitions; or prior experiences. Concepts and themes may also be derived from ‘standard’ social science categories of exploration, for example power, race, class, gender etc. On the other hand, many researchers will look for concepts to emerge inductively from their data without any preconceived notions. With predetermined categories, researchers need to be wary of ‘fitting’ their data to their expectations, and not being able to see alternate explanations. However, purely inductive methods are also subject to bias since unacknowledged subjectivities can impact on the themes that emerge from the data.

To explore concepts, researchers generally engage in line-by-line or paragraph-by-paragraph reading of transcripts, engaging in what grounded theory proponents refer to as ‘constant comparison’. In other words, concepts and meaning are explored in each text and then compared with previously analysed texts to draw out both similarities and disparities (Glaser and Strauss 1967).

EXPLORING LITERARY DEVICES Metaphors, analogies and even proverbs are often explored because of their ability to bring richness, imagery and empathetic understanding to words. These devices often organize thoughts and facilitate understanding by building connections between speakers and an audience. Once you start searching for such literary devices, you’ll find they abound in both the spoken and written word. Qualitative data analysts often use these rich metaphorical descriptions to categorize divergent meanings of particular concepts.

EXPLORING NONVERBAL CUES One of the difficulties in moving from raw data to rich meaning is what is lost in the process. And certainly the tendency in qualitative data collection and analysis is to concentrate on words, rather than the tone and emotive feeling behind the words, the body language that accompanies the words, or even words not spoken. Yet this world of the nonverbal can be central to thematic exploration. If your raw data, notes or transcripts contain nonverbal cues, they can lend significant meaning to content and themes. Exploration of tone, volume, pitch and pace of speech; the tendency for hearty or nervous laughter; the range of facial expressions and body language used; and shifts in any or all of these, can be central in a bid for meaningful understanding.

Looking for patterns and interconnections

Once texts have been explored for relevant themes, the quest for meaningful understanding generally moves to the relationships that might exist between and amongst various themes. For example, you may look to see if the use of certain words and/or concepts is correlated with the use of other words and/or concepts. Or you may explore whether certain words or concepts are associated with a particular range of nonverbal cues or emotive states. You might also look to see if there is a connection between the use of particular metaphors and nonverbal cues. And of course, you may want to explore how individuals with particular characteristics vary on any of these dimensions.
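One simple, mechanical way to begin this kind of pattern-searching is to count how often pairs of themes are coded onto the same passage. The sketch below uses invented themes and coding results purely for illustration; real analysis would of course interpret such counts, not just compute them:

```python
from itertools import combinations
from collections import Counter

# Hypothetical coding results: the set of themes assigned to each passage.
coded_passages = [
    {"surveillance", "anxiety"},
    {"surveillance", "isolation"},
    {"anxiety", "isolation", "surveillance"},
    {"family support"},
]

# Count how often each pair of themes co-occurs in a passage.
co_occurrence = Counter()
for themes in coded_passages:
    for pair in combinations(sorted(themes), 2):
        co_occurrence[pair] += 1

for pair, n in co_occurrence.most_common():
    print(pair, n)
```

A pair that turns up together far more often than chance would suggest is a candidate relationship worth exploring qualitatively, which is essentially what QDA software does behind its matrix and query tools.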

Interconnectivities are assumed to be both diverse and complex and can point to the relationship between conditions and consequences, or how the experiences of the individual relate to more global themes. Conceptualization and abstraction can become quite sophisticated and can be linked to both model and theory building.

QDA software

It wasn’t long ago that QDA was done ‘by hand’ with elaborate filing, cutting, sticky notes, markers, etc. But quality software (as highlighted in Box 11.3) now abounds and ‘manual handling’ is no longer necessary. QDA programs can store, code, index, map, classify, notate, find, tally, enumerate, explore, graph, etc., etc. Basically, they can: (1) do all the things you can do manually, but much more efficiently; and (2) do things that manual handling of a large data set simply won’t allow. And while becoming proficient at the use of such software can mean an investment in time (and possibly money), if you’re working with a large data set you’re likely to get that time back.

Okay … if QDA programs are so efficient and effective, why are they so inconsistently called on by researchers working with qualitative data? Well, I think there are three answers here. First is a lack of familiarity – researchers may not be aware of the programs, let alone what they can do. Second is that the learning investment is seen as too large and/or difficult. Third, researchers may realize, or decide, that they really don’t want to do that much with their qualitative data; they may just want to use it sparingly to back up a more quantitative study.

My advice? Well, you really need to think through the pros and cons here. If you’re working with a small data set and you can’t see any more QDA in your future, you may not think it will pay to go down this path – manual handling might do the trick. But if you (a) are after truly rigorous qualitative analysis; (b) have to manage a large data set; or (c) see yourself needing to work with qualitative data in the future, it’s probably worth battling the learning curve. Not only is your research process likely to be more rigorous, you will probably save a fair bit of time in the long run.

To get started with QDA software, I would recommend talking to other researchers or lecturers to find out what programs might be most appropriate for your goals and data. I would also have a look at relevant software sites on the Internet (see Box 11.3); there is a lot of information here and some sites even offer trial programs. Finally, I’d recommend that you take appropriate training courses. NUD*IST and NVivo are both very popular and short courses are often easy to find.

Specialist strategies

Up to this point, I’ve been treating QDA as a homogenous approach with underlying logic and methods, and I haven’t really discussed the distinct disciplinary and paradigmatic approaches that do exist. But as mentioned at the start of this section, the literature here is dense, and a number of distinct approaches have developed over the past decades. Each has its own particular goals, theory and methods … and each will have varying levels of applicability to your own research. Now while I would certainly recommend delving into the approaches that resonate with you, it’s worth keeping in mind that you don’t have to adopt just one approach. It is possible to draw insights from various strategies in a bid to evolve an approach that best cycles between your data and your own research agenda.

Table 11.6 may not be comprehensive enough to get you started in any particular branch of qualitative data analysis, but it does provide a comparative summary of some of the more commonly used strategies. You can explore these strategies further by delving into the readings offered at the end of the chapter.

Presenting qualitative data

I don’t think many books adequately cover the presentation of qualitative data, but I think they should. New researchers often struggle with the task and end up falling back on what they are most familiar with, or what they can find in their methods books (which are often quantitatively biased). So while these researchers may only have three cases, five documents, or eight interviews, they can end up with some pseudo-quantitative analysis and presentation that includes pie charts, bar graphs and percentages. For example, they may say 50% feel … and 20% think, when they’re talking about a total of only five people.

Well this isn’t really where the power of qualitative data lies. The power of qualitative data is in the actual words and images themselves – so my advice is to use them. If the goal is the rich use of words – avoid inappropriate quantification, and preserve and capitalize on language.

So how do you preserve, capitalize on and present words and images? Well, I think it’s about story telling. You really have to have a clear message, argument or storyline, and you need to selectively use your words and/or images in a way that gives weight to that story. The qualitative data you present should be pointed, powerful and able to draw your readers in.

Case Study: Sherman Tremaine

Program Transcript

[MUSIC PLAYING]


DR. MOORE: Good afternoon. I’m Dr. Moore. Want to thank you for coming in for your appointment today. I’m going to be asking you some questions about your history and some symptoms. And to get started, I just want to ensure I have the right patient and chart. So can you tell me your name and your date of birth?

SHERMAN TREMAINE: I’m Sherman Tremaine, and Tremaine is my game game. My birthday is November 3, 1968.

DR. MOORE: Great. And can you tell me today’s date? Like the day of the week, and where we are today?

SHERMAN TREMAINE: Use any recent date, and any location is OK.

DR. MOORE: OK, Sherman. What about do you know what month this is?


DR. MOORE: And the day of the week?

SHERMAN TREMAINE: Oh, it’s a Wednesday or maybe a Thursday.

DR. MOORE: OK. And where are we today?

SHERMAN TREMAINE: I believe we’re in your office, Dr. Moore.

DR. MOORE: OK, great. So tell me a little bit about what brings you in today. What brings you here?

SHERMAN TREMAINE: Well, my sister made me come in. I was living with my mom, and she died. I was living, and not bothering anyone, and those people – those people, they just won’t leave me alone.

DR. MOORE: What people?

SHERMAN TREMAINE: The ones outside my window watching. They watch me. I can hear them, and I see their shadows. They think I don’t see them, but I do. The government sent them to watch me, so my taxes are high, so high in the sky. Do you see that bird?

DR. MOORE: Sherman, how long have you saw or heard these people?

© 2021 Walden University, LLC

SHERMAN TREMAINE: Oh, for weeks, weeks and weeks and weeks. Hear that – hear that heavy metal music? They want you to think it’s weak, but it’s heavy.

DR. MOORE: No, Sherman. I don’t see any birds or hear any music. Do you sleep well?

SHERMAN TREMAINE: I try to but the voices are loud. They keep me up for days and days. I try to watch TV, but they watch me through the screen, and they come in and poison my food. I tricked them though. I tricked them. I locked everything up in the fridge. They aren’t getting in there. Can I smoke?

DR. MOORE: No, Sherman. There is no smoking here. How much do you usually smoke?

SHERMAN TREMAINE: Well, I smoke all day, all day. Three packs a day.

DR. MOORE: Three packs a day. OK. What about alcohol? When was your last drink?

SHERMAN TREMAINE: Oh, yesterday. My sister buys me a 12-pack, and tells me to make it last until next week’s grocery run. I don’t go to the grocery store. They play too loud of the heavy metal music. They also follow me there.

DR. MOORE: What about marijuana?

SHERMAN TREMAINE: Yes, but not since my mom died three years ago.

DR. MOORE: Use any cocaine?

SHERMAN TREMAINE: No, no, no, no, no, no, no. No drugs ever, clever, ever.

DR. MOORE: What about any blackouts or seizures, or see or hear things from drugs or alcohol?

SHERMAN TREMAINE: No, no, never a clever [INAUDIBLE] ever.

DR. MOORE: What about any DUIs or legal issues from drugs or alcohol?

SHERMAN TREMAINE: Never clever’s ever.

DR. MOORE: OK. What about any medication for your mental health? Have you tried those before, and what was your reaction to them?

SHERMAN TREMAINE: I hate Haldol and Thorazine. No, no, I’m not going to take it. Risperidone gave me boobs. No, I’m not going to take it. Seroquel, that is OK. But they’re all poison, nope, not going to take it.

DR. MOORE: OK. So tell me, any blood relatives have any mental health or substance abuse issues?


SHERMAN TREMAINE: They say that my dad was crazy with paranoid schizophrenia. He died in the old state hospital. They gave him his beer there. Can you believe that? Not like them today. My mom had anxiety.

DR. MOORE: Did any blood relatives commit suicide?

SHERMAN TREMAINE: Oh, no demons there. No, no.

DR. MOORE: What about you? Have you ever done anything like cut yourself, or had any thoughts about killing yourself or anyone else?

SHERMAN TREMAINE: I already told you. No demons there. Have been in the hospital three times though when I was 20.

DR. MOORE: OK. What about any medical issues? Do you have any medical conditions?

SHERMAN TREMAINE: Ooh, I take metformin for diabetes. Had or I have a fatty liver, they say, but they never saw it. So I don’t know unless the aliens told them.

DR. MOORE: OK. So who raised you?

SHERMAN TREMAINE: My mom and my sister.

DR. MOORE: And who do you live with now?

SHERMAN TREMAINE: Myself, but my sister’s plotting with the government to change that. They tapped my phone.

DR. MOORE: OK. Have you ever been married? Are you single, widowed, or divorced?

SHERMAN TREMAINE: I’ve never been married.

DR. MOORE: Do you have any children?


DR. MOORE: OK. What is your highest level of education?

SHERMAN TREMAINE: I went to the 10th grade.

DR. MOORE: And what do you like to do for fun?

SHERMAN TREMAINE: I don’t work, so smoking and drinking pop.

DR. MOORE: OK. Have you ever been arrested or convicted for anything legally?

SHERMAN TREMAINE: No, but they have told me they would. They have told me they would if I didn’t stop calling 911 about the people outside.

DR. MOORE: OK. What about any kind of trauma as a child or an adult? Like physical, sexual, emotional abuse.


SHERMAN TREMAINE: My dad was rough on us until he died.



DR. MOORE: So thank you for answering those questions for me. Now, let’s talk about how I can best help you.



Case Study

Student Instruction Sheet

Assignment points value = 40

Due Date: October 23, 2022

The case study “Harbour Community College Food Service Program” is located on pages 240 and 241 of our textbook.

Instructions to Students:

Read the case carefully, conduct research as stated, and assimilate all facts and data.

Please write a report covering the following bullet points to address the discussion questions. You are not to simply answer the questions.

Construct a clear and insightful problem statement and identify all underlying issues. Please address the discussion questions at the end of the case.

Propose solution(s) that are sensitive to all the identified issues. For each problem, propose solutions, giving a complete rationale and justification.

Evaluate each solution you proposed, providing thorough insightful explanations, feasibility of each solution, and the impact of each solution.

Provide a concise yet thorough, action-oriented recommendation, justifying why it will solve the problem. Address limitations of the solution(s) and outline recommended future analysis.

You can organize your report so that you address the problem, solutions, and feasibility issues for each important problem in separate sections. Use of headers and graphics is strongly encouraged.

Your answers should be concise, concrete, action oriented, and well-supported. You can add text, appendices, graphics, charts, graphs, and exhibits as desired. These can be derived from external sources.

Please limit your narrative to three (3) to four (4) pages, double-spaced, and in APA format. “APA Style Guidelines” is available on D2L.

When you are finished, upload your file to the Dropbox named: Case Study 2.

Ashesh Saraf, MGMT 340-50, Case Study #2