Essay on Data Analysis Procedures

Completed interviews were transcribed, coded, and analyzed for consistent themes. According to Weiss (1994), certain analytical procedures are used in every analysis of qualitative interview data. These include sorting the data (i.e., managing the data by coding and sorting interview materials into excerpt files consisting of statements of what we believe the material tells us), local integration (i.e., a process that provides a coherent understanding of excerpt file materials and their coding), and inclusive integration (i.e., organizing the analysis into the different areas that result from local integration). In the current study, MAXQDA 12 software was used to search for relevant data; to retrieve, recode, refile, and enumerate coded items; and to relate coded items to one another (Lofland, Snow, Anderson, & Lofland, 2006). The process of using this software is described below.
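The sorting step described above, building excerpt files of coded interview material, can be illustrated with a minimal sketch. The records and code names below are hypothetical; in the study itself this work was done in MAXQDA 12.

```python
# Hypothetical excerpt records: each carries the code assigned during sorting
excerpts = [
    {"code": "sorting", "text": "I kept a file of every mention of in-laws."},
    {"code": "finance", "text": "Money disputes came up in most interviews."},
    {"code": "finance", "text": "Debt was a recurring source of conflict."},
]

# Build excerpt files: one list of statements per code
excerpt_files = {}
for record in excerpts:
    excerpt_files.setdefault(record["code"], []).append(record["text"])

print(sorted(excerpt_files))  # ['finance', 'sorting']
```

Local integration would then work through each excerpt file to develop a coherent understanding of the statements collected under that code.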

Although the case study approach was an appropriate option for organizing and reporting the qualitative data in this study, since individuals (Saudi women) were the primary unit of analysis, the use of interviews as a data collection method meant that an analytical framework approach organized around the interview questions also proved vital to organizing the data. Saudi Arabian women were the focus of the case studies, as the study drew on their lived marriage experiences. The data were organized through the case studies of Saudi Arabian women who manifested different marriage dissolution attributes in the way they lived their married lives. The analytical framework approach involved organizing the responses to the interviews question by question, particularly for the standardized interviewing format. For instance, in evaluating the strengths and weaknesses of marriage that led to or averted cases of marriage dissolution, responses to those questions were grouped together. According to Patton, it is vital to have backup copies of all data while ensuring that one "master copy" is kept safe (Patton, 2002, p. 441). In this regard, the data were well protected, given that the data collected from the Saudi Arabian women were voluminous and required efficient organization. The plan was to send all research progress to an email address so that it could be accessed at any time, even if the primary data were destroyed or otherwise lost.
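The question-by-question grouping used in the analytical framework approach can be sketched as follows. The participant identifiers, question labels, and answers are hypothetical illustrations, not data from the study.

```python
from collections import defaultdict

# Hypothetical interview responses: (participant_id, question_id, answer)
responses = [
    ("P1", "Q1", "Communication was our main strength."),
    ("P2", "Q1", "We rarely talked about problems."),
    ("P1", "Q2", "Financial pressure weakened the marriage."),
]

# Group responses question by question, as in the standardized format
by_question = defaultdict(list)
for participant, question, answer in responses:
    by_question[question].append((participant, answer))

print(len(by_question["Q1"]))  # 2: both answers to Q1 are grouped together
```

Grouping this way lets all answers to a given question, such as the strengths and weaknesses of a marriage, be read side by side across participants.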

According to Patton, formulating a "manageable classification or coding scheme" is the initial step of analysis (Patton, 2002, p. 463). The content analysis involved identifying, coding, categorizing, classifying, and naming the primary patterns in the data. The prominent content of the observations and interviews was analyzed to ascertain what was significant. The process commenced by reading the interviews and attaching notes or comments on how the various data might be used. These notes and comments were summarized into different topics, and the copy containing the topics derived from the classified notes and comments served as the indexed copy. Since most of the responses from the interviews were similar in nature, a coding system was devised so that similar notes and comments were recorded in the same category. Once this initial step of establishing a classification system, or coding categories, was done and recorded, a new reading commenced to begin the formal coding in a systematic manner. The initial step was necessary because the responses from the interviews illustrated more than one pattern or theme; harmonizing the patterns or themes into a suitable classification system or coding categories was therefore vital to extracting pertinent information from the data collected. Moreover, the process of coding and classification built a framework for organizing and describing the data from the interviews. To avoid confusion, different colors were used while coding to ensure that every response was classified correctly.
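The classification step, attaching excerpts to coding categories, can be sketched in simplified form. The keyword-based scheme below is a hypothetical stand-in for the manual, researcher-driven coding the study performed in MAXQDA 12.

```python
# Hypothetical coding scheme: keyword -> category (a simplified sketch;
# the actual coding was interpretive, not keyword matching)
coding_scheme = {
    "communication": "marital_strengths",
    "financial": "dissolution_factors",
    "support": "marital_strengths",
}

excerpts = [
    "Communication was our main strength.",
    "Financial pressure weakened the marriage.",
    "Family support kept us together.",
]

# Attach each excerpt to every category whose keyword it mentions
coded = {category: [] for category in set(coding_scheme.values())}
for excerpt in excerpts:
    for keyword, category in coding_scheme.items():
        if keyword in excerpt.lower():
            coded[category].append(excerpt)

print(sorted((c, len(e)) for c, e in coded.items()))
# [('dissolution_factors', 1), ('marital_strengths', 2)]
```

The category counts give the same kind of enumeration of coded items that the software provided during analysis.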

With respect to convergence, the principles of internal homogeneity and external heterogeneity outlined by Patton were applied (Patton, 2002, p. 465). Having a classification system or specific coding categories is vital but not sufficient; the patterns and the resulting categories had to be verified for convergence and divergence. To check internal homogeneity, the data in the various categories were evaluated to determine how closely they were related, that is, how well each piece fit within a particular category. Verifying external heterogeneity involved studying the categories to ensure that the attributes distinguishing them were clear and pronounced. This process was important because it ensured that data would not overlap across categories; overlapping data would have indicated a fault in the classification system. The divergence verification process was considered the most important, as it ensured that the study did not include data exceeding the boundaries set for the coding categories or classification system. It also ensured that the study did not accumulate an overwhelming number of categories that would have complicated the analysis. Thus, despite the urge to include every piece of data from the interviews, the divergence process ensured that the results were not distorted by data that did not fall within the set categories.
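The external heterogeneity check, verifying that no excerpt sits in more than one category, can be expressed as a simple test. The category names and excerpt identifiers are hypothetical.

```python
# Hypothetical coded data: category -> set of excerpt identifiers
coded = {
    "marital_strengths": {"E1", "E3"},
    "dissolution_factors": {"E2"},
}

# If any excerpt appears in two categories, the flat list of identifiers
# will contain duplicates and its length will exceed that of the set
all_ids = [excerpt for ids in coded.values() for excerpt in ids]
overlap = len(all_ids) != len(set(all_ids))

print(overlap)  # False: the categories do not overlap
```

A result of True would indicate the kind of fault in the classification system described above, prompting a revision of the category boundaries.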

Determining substantive significance depends on the efficiency of the convergence and divergence verification process. It is common for researchers to make Type I and Type II errors, that is, stating that something is significant when in fact it is not, and conversely stating that something is not significant when it is. A thorough verification of convergence and divergence was therefore made to avoid Type I or Type II errors in the argument for substantive significance when presenting the conclusions and findings.