Purdue Online Writing Lab Purdue OWL® College of Liberal Arts

University Thesis and Dissertation Templates


This page is brought to you by the OWL at Purdue University. When printing this page, you must include the entire legal notice.

Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

Theses and dissertations are already intensive, long-term projects that require a lot of effort and time from their authors. Formatting for submission to the university is often the last thing that graduate students do, and may delay earning the relevant degree if done incorrectly.

Below are some strategies graduate students can use to deal with institutional formatting requirements to earn their degrees on time.

Disciplinary conventions are still paramount.

Scholars in your own discipline are the most common readers of your dissertation, and your committee will likewise expect your work to match the expectations of your field. Always follow the style guide your field most commonly uses, and if your field follows conventions such as placing all figures and illustrations at the end of the document, do so. Once these considerations are met, move on to university formatting. University formatting almost always deals only with things like margins, fonts, numbering of chapters and sections, and illustrations; it does not interfere with disciplinary style conventions in content, such as APA's directive to use only authors' last names in-text.

Use your university's formatting guidelines and templates to your advantage.

If your institution has a template for formatting your thesis or dissertation that you can use, do so. Don't look at another student's document and try to replicate it yourself. These templates typically have the necessary section breaks and styles already in the document, and you can copy in your work from your existing draft using the style pane in MS Word to ensure you're using the correct formatting (similarly with software such as Overleaf when writing in LaTeX, templates do a lot of the work for you). It's also often easier for workers in the offices that deal with theses and dissertations to help you with your work if you're using their template — they are familiar with these templates and can often navigate them more proficiently.

These templates also include placeholders for all front matter you will need to include in your thesis or dissertation, and may include guidelines for how to write these. Front matter includes your table of contents, acknowledgements, abstract, abbreviation list, figure list, committee page, and (sometimes) academic history or CV; everything before your introduction is front matter. Since front matter pages such as the author's academic history and dissertation committee are usually for the graduate school and not for your department, your advisor might not remember to have you include them. Knowing about them well before your deposit date means you won't be scrambling to fill in placeholders at the last minute or getting your work returned for revision from the graduate school.

Consider institutional formatting early and often.

Many graduate students leave this aspect of submitting their projects until it's almost too late to work on it, causing delays in obtaining their degree. Simply being aware that this is a task you'll have to complete and making sure you know where templates are, who you can ask for help in your graduate office or your department, and what your institution's guidelines are can help alleviate this issue. Once you know what you'll be expected to do to convert to university formatting, you can set regular check-in times for yourself to do this work in pieces rather than all at once (for instance, when you've completed a chapter and had it approved by your chair). 

Consider fair use for images and other third-party content.

Most theses and dissertations are published through ProQuest or another publisher (Harvard, for instance, uses its own open publishing service). For this reason, your institution may require that all images or other content obtained from other sources fall under fair use rules; if an image is not considered fair use, you'll have to obtain permission to print it in your dissertation. Your institution should have more guidance on its specific expectations for fair use content; knowing what these guidelines are well in advance of your deposit date means you won't have to make last-minute changes or removals to deposit your work.

Thesis / dissertation formatting manual (2024).

  • Filing Fees and Student Status
  • Submission Process Overview
  • Electronic Thesis Submission
  • Paper Thesis Submission
  • Formatting Overview
  • Fonts/Typeface
  • Pagination, Margins, Spacing
  • Paper Thesis Formatting
  • Preliminary Pages Overview
  • Copyright Page
  • Dedication Page
  • Table of Contents
  • List of Figures (etc.)
  • Acknowledgements
  • Text and References Overview
  • Figures and Illustrations
  • Using Your Own Previously Published Materials
  • Using Copyrighted Materials by Another Author
  • Open Access and Embargoes
  • Copyright and Creative Commons
  • Ordering Print (Bound) Copies
  • Tutorials and Assistance
  • FAQ

UCI Libraries maintains the following templates to assist in formatting your graduate manuscript. If you are formatting your manuscript in Microsoft Word, feel free to download and use a template. If you would like to see what your manuscript should look like, PDFs are provided. If you are formatting your manuscript in LaTeX, UCI maintains a template on Overleaf.

  • Annotated Template (Dissertation) 2024: PDF of a template with annotations of what to look out for.
  • Word: Thesis Template 2024: editable template of the Master's thesis formatting.
  • PDF: Thesis Template 2024
  • Word: Dissertation Template 2024: editable template of the PhD dissertation formatting.
  • PDF: Dissertation Template 2024
  • Overleaf (LaTeX) Template
  • Last Updated: May 31, 2024 9:34 AM
  • URL: https://guides.lib.uci.edu/gradmanual



How to Write Your Dissertation Chapter 3?

Jason Burrey


In this article, we are going to discuss dissertation chapter 3, which many students consider the most challenging section to write, and for good reason.


The body of a dissertation is divided into different chapters and sections. The standard dissertation structure may vary from discipline to discipline, but it typically includes sections like:

  • Introduction
  • Literature review
  • Methodology

Each part of the dissertation should have a central idea, which is introduced and argued.

We will provide you with a concise and in-depth overview of chapter 3 methodology to help you get started.

What is dissertation chapter 3 about?

Chapter 3 of a dissertation outlines the specific methods chosen by a writer to research a problem. It's essential to provide enough information so that an experienced researcher could replicate the study.

You need to explain what techniques were used for data collection and provide an analysis of the results to answer your research question. You also need to explain the chosen methods and justify them, describe the research setting, and give a detailed explanation of how you applied those methods in your study.

So how do you do that?

  • Start with a clear explanation of approaches used for solving the problem.
  • Describe all the components of methodology in detail.
  • Describe all methods and tell how you used them in your study. Clarify why each particular technique would be the best choice for answering your research question.

Below is a basic outline you can use as a template when writing the dissertation methodology section.

How to write AP government chapter 3 outline?

Looking for an AP Government chapter 3 outline, which provides a college-level introduction to the structure and function of the US government and politics? Keep in mind that it's not the same thing as a typical outline of the methodology section in your final paper.

Example of outline for chapter 3

  • Introduction, stating the purpose of the part, introducing the methods, and outlining the section's organization.
  • Research questions, hypotheses, and variables.
  • Research design: describe the investigation approach and justify the specific methods chosen, citing relevant literature.
  • Study setting: describe the role of the researcher in gathering data.
  • Study participants and data sources: explain the criteria and strategies used when selecting participants, and describe the systems used for collecting and storing information.
  • Procedures and instruments: demonstrate the methods and state each step for performing the study in detail.
  • Data analysis: discuss the statistical tools and methods applied to analyze the information, as well as measures taken to increase validity.
  • Summary of the key points.

What is chapter 3 methodology?

When reporting on their new studies, scholars always have to answer two main questions:

  • How was the latest information gathered or generated?
  • Which specific techniques and procedures were utilized when analyzing data?

There are loads of different techniques and procedures you can choose to investigate a particular research problem.

Remember: choosing appropriate methodology is critical to the success of any study.

If you select an unreliable technique, it will produce inaccurate results during the interpretation of your findings. That’s not the outcome you want.

There are two groups of primary data collection methods: qualitative and quantitative.

Qualitative research techniques don't involve numbers or mathematical calculations.

They are strongly connected with emotions, words, feelings, and sounds. Qualitative study ensures in-depth investigation and a greater level of problem understanding.

Qualitative investigation includes interviews, case studies, role-playing, games, observations, focus groups, and questionnaires with open-ended questions.

Quantitative techniques for data collection and analysis are based on mathematical calculations in a variety of forms and statistics.

They include correlation and regression methods, questionnaires with close-ended questions, and measures such as the mean, median, and mode.

These procedures are cheaper to apply than qualitative ones. They require less time for implementation. They are highly standardized and, as a result, scientists can easily compare findings.

Wondering which approach to choose to cover your investigation question? It depends on the research area and specific objectives.

A few thoughts on the chapter 3 thesis

Chapter 3 of a thesis is written in the same way as the methodology part of a dissertation: you discuss how you performed the study in great detail. It usually includes the same elements and has a similar structure.

You can use the outline example of this section for a dissertation, but take into account that its structure should reflect the research approach and design of your specific study.

That’s why you should be careful and include only relevant elements into your methodology section.

As you can see, dissertation chapter 3 is a very significant part of the lengthy academic paper students write to get their degrees.

It should be written like a recipe so that anyone could adopt your techniques and replicate your investigation.

It requires strong analytical and critical thinking skills, dedication, and many hours of reading and writing.

It’s essential to choose the right approach to selecting and explaining investigation techniques.

We hope that this quick guide will help you create an impressive methodology section of your final academic project.



Chapter 3 – Dissertation Methodology (example)

Disclaimer: This is not a sample of our professional work. The paper has been produced by a student. You can view samples of our work here. Opinions, suggestions, recommendations and results in this piece are those of the author and should not be taken as our company views.

Type of Academic Paper – Dissertation Chapter

Academic Subject – Marketing

Word Count – 3017 words

Introduction

The current chapter presents the development of the research methods needed to complete the experimentation portion of the current study. The chapter discusses in detail the various stages of developing the methodology, including a detailed discussion of the philosophical background of the chosen research method. In addition, it describes the data collection strategy, including the selection of research instrumentation and sampling. The chapter closes with a discussion of the analysis tools used to analyse the data collected.

Selecting an Appropriate Research Approach

Creswell (2013) stated that research approaches are plans and procedures that span the steps from broad assumptions to detailed methods of data collection, analysis, and interpretation. The several decisions involved in the process determine which approach should be used in a specific study and are informed by the philosophical assumptions brought to the study (Creswell, 2013). Included in this are procedures of inquiry (or research designs) and the specific research methods used for data collection, analysis, and interpretation. However, Guetterman (2015), Lewis (2015), and Creswell (2013) argue that the selection of a specific research approach is based on the nature of the research problem or issue being addressed, the researchers' personal experiences, and even the audience for whom the study is being developed.

There are many ways to customise research approaches to develop one most suited to a particular study. However, the three main categories into which research approaches are organised are qualitative, quantitative, and mixed methods. Creswell (2013) comments that the three approaches are not as discrete or distinct from one another as they may seem, stating that "qualitative and quantitative approaches should not be viewed as rigid, distinct categories, polar opposite, or dichotomies" (p. 32). Newman and Benz (1998) pointed out that quantitative and qualitative approaches instead represent different ends of a continuum, since a study "tends" to be more quantitative than qualitative or vice versa. Mixed methods research resides in the middle of this continuum, as it can incorporate elements and characteristics of both quantitative and qualitative approaches. Lewis (2015) points out that the main distinction often cited between quantitative and qualitative research is framed in terms of using numbers rather than words, or of using closed-ended questions for quantitative hypotheses rather than open-ended questions for qualitative interviews. Guetterman (2015) points out that a clearer way of viewing the gradations of difference between the approaches is to examine the basic philosophical assumptions brought to the study, the kinds of research strategies used, and the particular methods implemented in conducting those strategies.

Underlying Philosophical Assumptions

An important component of defining the research approach involves philosophical assumptions that contribute to the broad research approach of planning or proposing to conduct research. It involves the intersection of philosophy, research designs and specific methods as illustrated in Fig. 1 below.

Figure 3.2-1: Research Onion (Source: Saunders and Tosey, 2013)

Slife and Williams (1995) have argued that philosophical ideas often remain hidden within research. However, they still play an influential role in research practice, and it is for this reason that they should be identified. Various philosophical assumptions are used to construct or develop a study. Saunders et al. (2009) define research philosophy as a belief about how data about a phenomenon should be gathered, analysed and used, and identify common research philosophies such as positivism, realism, interpretivism, subjectivism, and pragmatism. Dümke (2002) believes that research philosophy is mainly characterised by two views: positivism and phenomenology.

Positivism reflects acceptance of the philosophical stance of natural scientists (Saunders, 2003). According to Remenyi et al. (1998), there is a preference for working with an "observable social reality", and the outcome of such research can be "law-like" generalisations similar to those produced by physical and natural scientists. Gill and Johnson (1997) add that positivism also emphasises a highly structured methodology to allow replication by other studies. Dümke (2002) agrees, explaining that a positivist philosophical assumption produces highly structured methodologies and allows for generalisation and the quantification of objectives that can be evaluated by statistical methods. Under this philosophical approach, the researcher is considered an objective observer who should neither be impacted by nor impact the subject of research.

On the other hand, more phenomenological approaches hold that the social world of business and management is too complex to develop theories and laws similar to those of the natural sciences. Saunders et al. (2000) argue that this is why reducing observations of the real world to simple laws and generalisations produces a sense of reality that is somewhat superficial and fails to capture its complexity.

The current study adopts positivistic assumptions because the literature review discussed the importance of Big Data in industrial domains and the need to measure its success in business operations. The current study aims to examine the impact that Big Data has on automobile companies' operations. To identify a positive relationship between Big Data usage and beneficial business outcomes, theory is used to generate hypotheses about the relationship that can later be tested, allowing law-like explanations to be assessed (Bryman and Bell, 2015).

Selecting Interpretive Research Approach

Interpretive research approaches are derived from the research philosophy that is adopted. According to Dümke (2002), the two main research approaches are deductive and inductive. The inductive approach is commonly referred to when theory is derived from observations: the research begins with specific observations and measures, and a hypothesis is developed from patterns detected in them. Dümke (2002) argues that researchers who use an inductive approach usually work with qualitative data and apply various methods to gather information reflecting different views. Given the philosophical assumptions discussed in the previous section, it is reasonable to use the deductive approach for the current study. It is also the most commonly used approach for establishing a relationship between theory and research. The figure below illustrates the steps in the process of deduction.

[Figure: the process of deduction, in which hypotheses are tested through data collection, confirmed or rejected, and the theory revised accordingly]

Based on what is known about a specific domain and the theoretical considerations encompassing it, a hypothesis or hypotheses are deduced that will later be subjected to empirical enquiry (Daum, 2013). Through these hypotheses, concepts of the subject of interest are translated into entities that are researchable. Researchers are then able to deduce their hypotheses and convert them into operational terms.


Justifying the Use of Quantitative Research Method

Saunders (2003) notes that almost all research will involve some numerical data, or will contain data that can be quantified, to help a researcher answer their research questions and meet the study's objectives. Quantitative data refers to all such data and can be a product of any research strategy (Bryman and Bell, 2015; Guetterman, 2015; Lewis, 2015; Saunders, 2003). Based on the philosophical assumptions and interpretive research approach, a quantitative research method is the best suited for the current study. Haq (2014) explains that quantitative research is about collecting numerical data and then analysing it through statistical methods to explain a specific phenomenon. Muijs (2010) defends the use of quantitative research because, unlike qualitative research, which argues that there is no pre-existing reality, quantitative research assumes that there is only a single reality about social conditions that researchers cannot influence in any way. Also, qualitative research is commonly used when there is little to no knowledge of a phenomenon, whereas quantitative research is used to find the cause and effect relationship between variables to either verify or nullify some theory or hypothesis (Creswell, 2002; Feilzer, 2010; Teddlie and Tashakkori, 2012).

Selecting an Appropriate Research Strategy

There are many strategies available to implement in a study, as evidenced by Fig. 1. There are many mono-method quantitative instruments, such as telephone interviews, web-based surveys, postal surveys, and structured questionnaires (Haq, 2014). Each instrument has its own pros and cons in terms of quality, time, and cost of data. Bryman (2006), Driscoll et al. (2007), Edwards et al. (2002), and Newby et al. (2003) note that most researchers use structured questionnaires for data collection because they are unable to control or influence respondents, which can lead to lower response rates but more accurate data. Saunders and Tosey (2015) have argued that quantitative data is simpler to obtain and more concise to present. Therefore, the current study uses a survey-based questionnaire (see Appendix A).

Justifying the Use of a Survey-Based Questionnaire

Surveys are considered among the most traditional forms of research and are used in non-experimental descriptive designs that describe some reality. Survey-based questionnaires are often restricted to a representative sample of the group of the study's interest; in this case, executives currently working for automobile companies in the UK. The survey instrument was chosen for being practical and inexpensive (Kelley et al., 2003). Given the philosophical assumptions, interpretive approach, and methodological approach, the survey design for the current study is considered the instrument best in line with these premises, besides being cost-effective.

Empirical Research Methodology

Research Design

This section describes the research design: the techniques used for data collection, the sampling strategy, and the data analysis for a quantitative method. Before moving to the strategies of data collection and analysis, a set of hypotheses was developed.

Hypotheses Development

The current study uses a quantitative research approach, making it essential to develop a set of hypotheses to be tested under the mono-method quantitative design. The following hypotheses were developed from the examination of the literature.

H1- The greater the company’s budget for Big Data initiatives (More than 1 million GBP), the greater its ability to monetise and generate new revenues.

H2- The greater the company's budget for Big Data initiatives (more than 1 million GBP), the greater the decrease in expenses found.

H3- The greatest impact of Big Data on a company is changing the way business is done.

H4- Big Data integrating with a company has resulted in competitive significance.

H5- The analytical abilities of a company allow it to achieve measurable results.

H6- Investing in Big Data will lead to highly successful business results.

H7- A business’s operations function is fuelling Big Data initiatives and effecting change in operations.

H8- The implementation of Big Data in the company has positive impacts on business.

This section describes the sampling method used to recruit the respondents needed to provide information, which was then analysed after collection.

Sampling Method

Collis (2009) explains that there are many kinds of sampling methods that can be used to create a specific target sample from a population. The current study uses simple random sampling to acquire the respondents with whom the survey will be conducted. Simple random sampling is considered the most basic form of probability sampling: elements are taken from the population at random, with all elements having an equal chance of being selected. According to (), as of 2014 there are about thirty-five active British car manufacturers in the UK, each with an employee population of 150 or more. The total population of employees in car manufacturers is therefore estimated at 5,250 employees. The sample size was calculated using the following equation:

n = (z² × p(1 − p) / e²) / (1 + (z² × p(1 − p)) / (e² × N))

where N is the population size, e is the margin of error (as a decimal), z is the confidence level (as a z-score), and p is the percentage value (as a decimal), taken here as 50% under the assumption of a normal distribution. With a population of 5,250, a 95% confidence level, and a 5% margin of error, the total sample size needed for the current study equals 300; therefore n = 300, which is the sample size of the current study.
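As a sketch, this finite-population sample-size calculation can be scripted. The function below implements the formula as reconstructed above; the exact figure obtained depends on the correction and rounding convention used.

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Sample size with finite-population correction:
    n = (z^2 * p(1-p) / e^2) / (1 + z^2 * p(1-p) / (e^2 * N))."""
    numerator = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(numerator / (1 + numerator / population))

# Population of 5,250 employees; 95% confidence (z = 1.96); 5% margin of error
n = sample_size(5250)
```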

The survey developed (see Appendix A) has a total of three sections, A, B, and C, with a total of 39 questions. Each section has its own set of questions serving its purpose. The survey is a mix of close-ended questions that seek to capture the respondents' demographic makeup, the Big Data initiatives of their company, and the impact that Big Data was having on their company. The survey is designed to take no longer than twenty minutes and was constructed on SurveyMonkey.com, an online survey provider website. The survey was left on the website for a duration of 40 days to ensure that the maximum number of respondents were able to answer it. Respondents were allowed to take the survey only if they passed a screening question confirming that they were working for an automobile company in the UK at the time. Gupta et al. (2004) believe that web surveys are visual stimuli and that the respondent has complete control over whether and how each question is read and understood. That is why Dillman (2000) argued that web questionnaires should closely resemble those taken through the mail/postal services.

Data Analysis

The collected data is then analysed using the Statistical Package for the Social Sciences (SPSS), version 24. The demographic section of the survey is analysed using descriptive statistics. Further analysis of the data includes regression analysis; simple regression analysis involves only one independent variable and one dependent variable. Farrar and Glauber (1967) assert that the purpose of regression analysis is to estimate the parameters of dependency, and that it should not be used to determine the interdependency of a relationship.


Conclusions

This chapter provided a descriptive and in-depth discussion of the methods involved in the current study's research. The current study adopts a quantitative approach that takes positivism as its philosophical underpinning, uses deductive reasoning as its research approach, and employs a mono-method quantitative design with a survey instrument for data collection. The methodology chapter also presented the data analysis techniques: descriptive statistics through frequency analysis, and regression analysis.

Examples of results:

Question 8- Of these staff, are mostly working in or for your consumer-facing (B2C) businesses, your commercial or wholesale (B2B) businesses, or both?


Based on the illustration, nineteen (19) respondents indicated that 501-1000 employees are dedicated to analytics for both B2B and B2C. The category of using Big Data analytics for both B2B and B2C accounts for the largest share of respondents, with 72 of 132 selecting it.


The figure above represents respondents' answers about their automobile company's plan for measuring Big Data's success. Of the 132 participants, 44.70 per cent responded that their company plans to use quantitative metrics associated with business performance to assess whether Big Data is actually successful. Another 30.30 per cent indicated that their company plans to use qualitative metrics tied to business performance. Using business performance to assess the success of Big Data is consistent with the results of the literature review, which indicated that previous studies did the same. An automobile company needs to know the results of using Big Data analytics, and that is possible only through business performance indicators, whether qualitative or quantitative.


Fig. 4.3-6 portrays participants' responses with regard to actually achieving measurable results from Big Data. According to 68.18 per cent of respondents, the company they worked for did indeed show measurable results from its investments in Big Data. However, 31.82 per cent indicated that there were no measurable results from investing in Big Data.
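Percentages like these come from simple frequency counts over the survey responses. The sketch below is a minimal illustration; the yes/no answers are hypothetical, chosen to mirror the 68.18/31.82 split reported above.

```python
from collections import Counter

# Hypothetical yes/no answers: 90 "yes" and 42 "no" out of 132 participants
responses = ["yes"] * 90 + ["no"] * 42

# Frequency count, then each category as a percentage of all responses
counts = Counter(responses)
percentages = {answer: round(100 * count / len(responses), 2)
               for answer, count in counts.items()}
```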


Bryman, A., Bell, E., 2015. Business Research Methods. Oxford University Press.

Daum, P., 2013. International Synergy Management: A Strategic Approach for Raising Efficiencies in the Cross-border Interaction Process. Anchor Academic Publishing.

Dümke, R., 2002. Corporate Reputation and its Importance for Business Success: A European Perspective and its Implication for Public Relations Consultancies. diplom.de.

Guetterman, T.C., 2015. Descriptions of Sampling Practices Within Five Approaches to Qualitative Research in Education and the Health Sciences. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research 16.

Haq, M., 2014. A Comparative Analysis of Qualitative and Quantitative Research Methods and a Justification for Adopting Mixed Methods in Social Research. ResearchGate, 1–22. doi:10.13140/RG.2.1.1945.8640

Kelley, K., Clark, B., Brown, V., Sitzia, J., 2003. Good practice in the conduct and reporting of survey research. Int J Qual Health Care 15, 261–266. doi:10.1093/intqhc/mzg031

Lewis, S., 2015. Qualitative Inquiry and Research Design: Choosing Among Five Approaches. Health Promotion Practice 16, 473–475. doi:10.1177/1524839915580941

Saunders, M., 2003. Research Methods for Business Students. Pearson Education India.

Saunders, M.N.K., Tosey, P., 2015. Handbook of Research Methods on Human Resource Development. Edward Elgar Publishing.


Chapter 3: Method

This chapter presents the methods and research design for this dissertation study. It begins by presenting the research questions and the settings: the LibraryThing and Goodreads digital libraries. This is followed by an overview of the mixed methods research design used, incorporating a sequence of three phases. Each of the three methods—qualitative content analysis, a quantitative survey questionnaire, and qualitative interviews—is then presented in detail. The codes and themes used for analysis during the qualitative phases are discussed next. The chapter continues with sections on the management of the research data for this study; the validity, reliability, and trustworthiness of study findings; and ethical considerations. The invitation letters and informed consent statement; survey instrument; interview questions; a quick reference guide used for coding and analysis; and documentation of approval from LibraryThing, Goodreads, and the FSU Human Subjects Committee are included in appendices.

3.1. Research Questions

As stated in Chapter 1, the purpose of this research, taking a social perspective on digital libraries, is to improve understanding of the organizational, cultural, institutional, collaborative, and social contexts of digital libraries. The following two research questions satisfy the purpose of the study within the approach, setting, and framework introduced in Chapter 1:

  • RQ1: What roles do LibraryThing and Goodreads play, as boundary objects, in translation and coherence between the existing social and information worlds they are used within?
  • RQ2: What roles do LibraryThing and Goodreads play, as boundary objects, in coherence and convergence of new social and information worlds around their use?

These two questions explore the existing and emergent worlds that may surround digital libraries in social, collaborative use and behavior. RQ1 focuses on examining how LibraryThing and Goodreads may support existing collaboration, communities, and other social activities and behaviors across social and information worlds, with a specific eye to translation, characteristics indicating coherence of existing worlds, and uses of the digital libraries as boundary objects. RQ2 focuses on examining how LibraryThing and Goodreads may support coherence and convergence of new, emergent social and information worlds and their characteristics, as indicated by use of the digital libraries (as boundary objects) as new, localized standards. The questions focus on the roles of each digital library, be there one role, multiple roles, or possibly no role played by LibraryThing and Goodreads. These roles may or may not include explicit support for collaboration, communities, or social contexts. The research questions use and incorporate the definitions, concepts, and propositions of social digital libraries (see section 2.4.3), the social worlds perspective (see sections 2.7.1.1 and 2.8.1), the theory of information worlds (see section 2.8.2), and the synthesized theoretical framework for social digital libraries (developed in section 2.8.3). Coherence and convergence are seen as the same concept in boundary object theory (see section 2.7.1.4), leading to overlap between the concepts—and the two research questions—in operational data collection and analysis. The connotations of the two indicate convergence will lead to new, emergent worlds, and this meaning is indicated by its use in RQ2, but not RQ1.

3.2. Setting: Case Studies of LibraryThing and Goodreads

In this dissertation study, the boundary objects of interest are defined and given as two digital libraries: LibraryThing and Goodreads (see sections 3.2.2 and 3.2.3 below). This approach is the reverse of the procedure used by Star and Griesemer (1989), who first identified the populations of communities, users, and stakeholders in their study, and then examined the boundary objects they used. Starting with the boundary objects is in line with Star’s later work (Bowker & Star, 1999; Star et al., 2003). Bødker and Christiansen (1997); Gal, Yoo, and Boland (2004); Henderson (1991); and Pawlowski, Robey, and Raven (2000) have used this approach to varying extents, demonstrating its validity and usefulness for studying social digital libraries as boundary objects.

3.2.1. Case Study Approach

This research takes a case study approach, in which "a detailed" and intensive "analysis of … individual case[s]"—LibraryThing and Goodreads—was performed (Fidel, 1984, p. 274). The research looked to generate "a comprehensive understanding of the event under study"—uses of these digital libraries as boundary objects within and across existing and emergent social and information worlds—and develop "more general theoretical statements about regularities in the observed phenomena" surrounding social digital libraries (p. 274). Case studies often focus on a cycle of research methods that inform each other through a longer, more detailed research process than a single exploratory method allows. A case study approach fosters multiple opportunities to revisit and reanalyze data collected earlier in the study, revise the research design as new facets and factors emerge, and combine multiple methods and data sources into a holistic description of each case. The research design used here, employing two qualitative methods and one quantitative method in a cycle (see section 3.3), follows this approach.

Yin (2003) breaks the process of conducting a case study into five phases. The phases "effectively force [the researcher] to begin constructing a preliminary theory" prior to data collection (p. 28), as done in Chapter 2. Each of Yin’s five steps can be found in sections of this dissertation. First, one must determine the research questions to be asked; these were included in section 3.1 above. Second, one must identify what Yin calls the "propositions," statements "direct[ing] attention to something that should be examined within the scope of study" (p. 22). The theoretical framework developed earlier (see section 2.8) and the purpose of this research as stated in Chapter 1 provide this necessary focus from a conceptual perspective. The operationalization of this focus is discussed for each method in sections 3.4.4, 3.5.3, 3.6.4, and 3.7. Third, Yin says one must determine the unit of analysis, based on the research questions. In this study, the overall units of analysis are the two social digital libraries under consideration, LibraryThing and Goodreads; other units of interest include communities, groups, and individuals. The specific unit of analysis for each method of data collection is discussed in sections 3.4.1, 3.5.1, and 3.6.2. Fourth, one must connect "data to [theoretical] propositions," matching patterns with theories (p. 26). Using the theoretical framework developed in section 2.8 in data analysis (see sections 3.4.4, 3.5.5, 3.6.6, and 3.7) provides for this matching process. For the final step, Yin says one must determine "the criteria for interpreting [the] findings" (p. 27); the criteria chosen for this research are discussed in the data analysis sections (3.4.4, 3.5.5, 3.6.6, and 3.7) and are considered in light of concerns of validity, reliability, and trustworthiness (section 3.9) and the benefits (section 1.7 and Chapter 5) and limitations (section 5.6) of the study.

This research employed a multiple-case, "holistic" design at the highest level, focusing on LibraryThing and Goodreads as units, but what Yin (2003, p. 42) calls an "embedded" design, with multiple units of analysis considered in each method, at lower levels. Examining two social digital libraries allows them to be compared and contrasted, but commonalities were expected to emerge—and did—across the two cases to allow theoretical and practical conclusions to be drawn (see Chapter 5). Yin stated that case study designs must be flexible and may change when research does not turn out as expected, and subtle changes were made to what was intended to be a flexible plan for case studies of LibraryThing and Goodreads and their use as boundary objects within and across existing and emergent social and information worlds.

3.2.2. LibraryThing

LibraryThing (LT) is a social digital library and web site founded in August 2005 (LibraryThing, n.d.-a), with over 1.8 million members as of June 2014 (LibraryThing, 2014). It allows users to catalog books they own, have read, or want to read (LibraryThing, n.d.-b); these serve as Functional Requirements for Bibliographic Records (FRBR) items (International Federation of Library Associations and Institutions, 2009). Users can assign tags to books, mark their favorites, and create and share collections of books with others; these collections are searchable and sortable. LT suggests books to users based on the similarity of collections. Users can provide reviews, ratings, or other metadata (termed "Common Knowledge"; LibraryThing, 2013) for editions of books (FRBR’s manifestations and expressions) and works (as in FRBR); this metadata and users’ tags are shared across the site (LibraryThing, n.d.-c). LT provides groups (administered by users or staff), which include shared library collection searching, forums, and statistics on the books collected by members of the group (LibraryThing, n.d.-d). Discussions from these forums about individual books are included on each book’s page, as are tags, ratings, and reviews. Each user has a profile page which links to their collections, tags, reviews, and ratings, and lists other user-provided information such as homepage, social networks used (Facebook, Twitter, etc.), and a short biography (LibraryThing, n.d.-c).

Examining LibraryThing in light of the definition of social digital libraries (see sections 1.1 and 2.4.3) shows the following:

  • LT features one or more collections of digital content collected for its users, who can be considered a community as a whole and part of many smaller communities formed by the groups feature. This content includes book data and metadata sourced from Amazon.com and libraries using the Z39.50 protocol (LibraryThing, n.d.-b); and user-contributed data, metadata, and content in many forms: tags, favorites, collections, reviews, posts in discussions, and profile information.
  • LT features services relating to the content and serving its user communities, including the ability to catalog books; create collections; discuss with others; and search for and browse books, reviews, tags, and other content.
  • LT is managed by a formal organization and company, and draws on the resources of other formal organizations (Amazon.com, libraries) and informal groupings (LT users) for providing and managing content and services.

As a large social digital library and web site, open to the public and with multiple facets, LibraryThing is well-suited as a setting and case for examining the role of digital libraries within and across communities. The existing research literature on LibraryThing has focused on its roles for social tagging and classification (e.g. Chang, 2009; Lu, Park, & Hu, 2010; Zubiaga, Körner, & Strohmaier, 2011) and in recommendation and readers’ advisory (e.g. Naughton & Lin, 2010; Stover, 2009). This study adds an additional view of the site as an online community and social digital library.

3.2.3. Goodreads

Goodreads (GR), similar to LibraryThing, is a social digital library and web site founded in January 2007 (Goodreads, 2014a). As of June 2014, it had 25 million members. Users can "recommend books" via ratings and reviews, "see which books [their] friends are reading; track the books [they are] reading, have read, and want to read; … find out if a book is a good fit for [them] from [the] community’s reviews" (para. 2); and join discussion groups "to discuss literature" (Goodreads, 2014b, para. 11). As with LibraryThing, Goodreads users can create lists of books (called "shelves"), which act as site-wide tags anyone can search on (para. 5). Searching and sorting are possible for other metadata and content types; metadata can apply to editions (manifestations or expressions) of a book or to whole works (in FRBR terms; International Federation of Library Associations and Institutions, 2009). Groups can be created, joined, and moderated by users (including Goodreads staff); they can include group shelves, discussion forums, events, photos, videos, and polling features. Users have profile pages, which may include demographic information, favorite quotes, writing samples, and events. Users who have more than 50 books on their shelves can apply to become a Goodreads librarian, which allows them to edit and update metadata for books and authors (Goodreads, 2012d, "What can librarians do?" section). In March 2013—during the early stages of this dissertation research—Amazon.com acquired Goodreads (Chandler, 2013).

Examining GR in light of the definition of social digital libraries (see sections 1.1 and 2.4.3) shows the following:

  • GR features one or more collections of digital content collected for its users, who can be considered a community as a whole and part of many smaller communities formed by the groups feature. This content includes book data and metadata previously sourced from Ingram (a book wholesaler), libraries (via WorldCat and the catalogs of the American, British, and German national libraries), and publishers (Chandler, 2012), and now from Amazon since their purchase (Chandler, 2013); and user-contributed metadata and content, including shelves, lists, forum posts, events, photos, videos, polls, profile information, and book trivia.
  • GR features services relating to the content and serving its user communities, including the ability to catalog books; create shelves; discuss with others; and search for and browse books, reviews, lists, and other content.
  • GR is managed by a formal organization and company—Goodreads Inc., although now owned by Amazon—and draws on the resources of other formal organizations (Amazon, Ingram, OCLC via WorldCat, libraries, and publishers) and informal groupings (GR users, the librarians group) for providing and managing content and services.

As with LibraryThing, Goodreads is well-suited as a setting and case for examining the role of digital libraries within and across communities, because it is a large social digital library and web site that is open to the public and has multiple facets. There is little existing research literature on Goodreads, limited to its use in recommendation and readers’ advisory (e.g. Naik, 2012; Stover, 2009) and examining its impact on the practice of reading (Nakamura, 2013). This study adds an additional view of the site as an online community and social digital library.

3.3. Research Design

Use of a mixed methods research design combines qualitative and quantitative methods to emphasize their strengths; minimize their weaknesses; improve validity, reliability, and trustworthiness; and obtain a fuller understanding of uses of social digital libraries as boundary objects within and across social and information worlds. Definitions of mixed methods research vary, but core characteristics can be identified, which Creswell and Plano Clark (2011, p. 5) summarize as

  • collection and analysis of both qualitative and quantitative data;
  • integration of the two forms of data at the same time, in sequence, or in an embedded design;
  • prioritizing one or both forms of data;
  • combining methods within a single study or multiple phases of a larger research program;
  • framing the study, data collection, and analysis within philosophical, epistemological, and theoretical lenses; and
  • conducting the study according to a specific research design meeting the other criteria.

This study meets all of these criteria. Qualitative and quantitative data were collected and integrated in sequence; qualitative data was prioritized, but not at the expense of quantitative data collection; multiple methods were used within this one study; and the study was based on the theoretical framework developed and the tenets of social informatics and social constructionism explained in Chapter 2 .

This study took a philosophical view of mixed methods research similar to the view of Ridenour and Newman (2008), who "reject[ed] the [standard] dichotomy" between qualitative and quantitative research methods, believing there to be an "interactive continuum" between the two (p. xi). They stated "both paradigms have their own contributions to building a knowledge base" (p. xii), suggesting a holistic approach to research design incorporating theory building and theory testing in a self-correcting cycle. Qualitative methods, Ridenour and Newman argued, should inform the research questions and purpose for quantitative phases, and vice versa; they termed this process an "interactive" one (p. xi). Research designs should come from the basis of "the research purpose and the research question" (p. 1), what "evidence [is] needed," and what epistemological stance should be taken "to address the question" (p. 18).

Greene (2007) presented a similar argument, stating "a mixed methods way of thinking actively engages with epistemological differences" (p. 27); multiple viewpoints are respected, understood, and applied within a given study. She acknowledged the tensions and contradictions that will exist in such thought, but believed this would produce the best "conversation" and allow the researcher to learn the most from their study and data (p. 27). Creswell and Plano Clark (2011) encompassed multiple viewpoints and potential designs in their chapter on choosing a mixed methods design (pp. 53–104). They considered six prototypical designs: (a) convergent parallel; (b) explanatory sequential; (c) exploratory sequential; (d) embedded; (e) transformative; and (f) multiphase.

The research design for this dissertation study is a variation on a multiphase design incorporating elements of the explanatory sequential and exploratory sequential designs of Creswell and Plano Clark. Three methods were used for data collection, following the process proposed by Ridenour and Newman (2008) and taking the approach to thought suggested by these authors, Creswell and Plano Clark (2011), and Greene (2007). The selection of this design and these methods was based on the research purpose discussed in Chapter 1, the research questions introduced in section 3.1, and the research setting explained in section 3.2. The methods used were

  • content analysis of messages in LibraryThing and Goodreads groups (section 3.4);
  • a structured survey of LibraryThing and Goodreads users (section 3.5); and
  • semi-structured qualitative interviews with users of LibraryThing and Goodreads (section 3.6).

The holistic combination of these methods, interrelated in a multiphase design, has allowed for exploratory and descriptive research on social digital libraries as boundary objects incorporating the strengths of quantitative and qualitative methods and the viewpoints of multiple perspectives.

3.3.1. Integrated Design

A sequential, multiphase research design was employed for two reasons. First, each of the methods above required focus on data collection and analysis by the researcher. Trying to use a parallel or concurrent design, conducting content analysis alongside a survey or a survey alongside interviews, could have caused excess strain; a sequential design improved the chances of success, the quality of data collected and analyzed, and the significance of and level of insight in the study’s conclusions. Second, each method built on the methods before it. The design of the survey and interview instruments was influenced by ideas drawn from the literature and theories for the study and by elements of interest uncovered during the content analysis phase. The interviews focused on gathering further detail on and insight into findings from the survey results and the content analysis. This combination of methods allowed for exploring each case through content analysis, obtaining summary explanatory data through surveys, and then detailed descriptive and explanatory data through the interviews, achieving the benefits of both the exploratory and explanatory research designs presented by Creswell and Plano Clark (2011, pp. 81–90).

Creswell and Plano Clark (2011) expressed caution, noting that multiphase research designs often require substantial time, effort, and multi-researcher teams. The three phases used here were not so lengthy or intensive as to delay the completion of this dissertation substantially. This is one coherent dissertation study, rather than the long-term, multi-project research program Creswell and Plano Clark cite as the prototypical multiphase design. While it was known in advance that this would not be the speediest dissertation research project, using a sequential design allowed the results from each phase to emerge as the research proceeded, instead of having to wait for all phases to complete, as in a concurrent design. A complete and insightful picture of the findings and conclusions of the dissertation came together within a reasonable amount of time and with a reasonable level of effort.

3.4. Content Analysis

Content analysis has been defined as "a technique for making replicable and valid inferences from texts (or other meaningful matter) to the contexts of their use" (Krippendorff, 2004a, p. 19), with emphasis often placed on "the content of communication" (Holsti, 1969, p. 2)—specific "characteristics of messages" (p. 14)—"as the basis of inference" (p. 2). Early forms of content analysis required objectivity and highly systematic procedures (see Holsti, 1969, pp. 3–5, 14). The form of content analysis used in this study considers the meaning and understanding of content to "emerge in the process of a researcher analyzing a text relative to a particular context" (Krippendorff, 2004a, p. 19), a subjective and less rigid approach. Such text or content may have multiple, socially constructed meanings, speaking to more "than the given texts" (p. 23); they are indicative of the "contexts, discourses, or purposes" surrounding the content (p. 24).

There are at least three categories of content analysis, which Ahuvia (2001) labels traditional, interpretive, and reception-based; other authors and researchers (e.g. Babbie, 2007, p. 325; Holsti, 1969, pp. 12–14) break content analysis into latent (subjective and qualitative) and manifest (objective and quantitative) categories of analysis. Early content analysis was purely objective and generated quantitative summaries and enumerations of manifest content, but qualitative and latent analysis have found greater acceptance over time (Ahuvia, 2001; Holsti, 1969, pp. 5–14; Krippendorff, 2004a). This study used the interpretive approach and focused coding on the latent content—the underlying meaning—of the data gathered. This section discusses the application of content analysis in the first phase of this dissertation research, including (a) the choice of the unit of analysis; (b) the population and sampling method chosen; (c) the sampling and data collection procedures followed, including a pilot test; and (d) how the data was analyzed.

3.4.1. Unit of Analysis

The unit of analysis chosen for the content analysis in this study was the message. LibraryThing’s and Goodreads’ group discussion boards are organized into threads, each of which may contain multiple individual messages. Analysis of these individual messages was aimed at uncovering indications of the roles the two digital libraries play in existing and emergent social and information worlds. Analysis began with the individual messages to ensure details and phenomena at that level were captured, but over time went beyond individual messages to the thread or group levels, since these phenomena served as instantiations of social and information worlds or as sites for interaction and translation.

3.4.2. Population and Sampling

The broader population of messages could be defined as all messages posted in public LibraryThing and Goodreads groups, but the logistics of constructing a sampling frame for such a population were and are all but impossible; it is improbable the two sites would provide data on all messages posted unless required to do so by law. Recent messages from active groups were of most interest and use for this study. The population of messages was defined as all messages from the most active LibraryThing groups in the past week (taken from http://www.librarything.com/groups/active) and the most recently active Goodreads groups (taken from http://www.goodreads.com/group/active) as of April 30, 2013, the day data collection began for the content analysis phase of the study. The sampling frames were restricted to as close to, but no more than, 100 groups as possible, based on LibraryThing’s claim that its list contains the 100 most active groups; the actual frames consisted of 91 LibraryThing groups and 93 Goodreads groups once duplicates were removed. During the planning and design of this study, Goodreads provided a list of "recently popular" groups (at http://www.goodreads.com/group/recently_popular) that was akin to LibraryThing’s list in nature; that list was taken down in early 2013 because it was causing a server slowdown (Jack & Finley, 2013). Using the most recently active groups did not guarantee consistent popularity or activity over a recent time period (such as a week), but it did address the need to collect recent messages from active groups and was deemed the most acceptable source for a sampling frame still available.

To obtain a sample of messages from this population, a stratified random sampling method using the levels of group, thread, and message was employed. From the lists identified above, five groups were selected at random from each digital library (for a total of ten), but with the following inclusion and exclusion criteria applied to help ensure representativeness and allow for meaningful analysis:

(a) At least one group from each digital library with over 100 messages posted in the last week was selected.
(b) At least one group from each digital library with under 100 messages posted in the last week was selected.
(c) Any group with fewer than 60 messages total was removed and a new group selected.
(d) Any group with fewer than two members was removed and a new group selected.
(e) Any group used in the pilot study (see below) was removed and a new group selected.
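Purely as an illustration, the group-selection rules above can be sketched in code. The record fields (`total_messages`, `members`, `messages_last_week`, `name`) and the redraw loop are hypothetical stand-ins for the manual procedure actually used, not a reconstruction of it.

```python
import random

def eligible(group, pilot_names):
    """Exclusion criteria (c), (d), and (e): enough messages, enough
    members, and not a group used in the pilot study."""
    return (group["total_messages"] >= 60
            and group["members"] >= 2
            and group["name"] not in pilot_names)

def select_groups(frame, pilot_names, n=5, seed=None):
    """Draw n groups at random from one site's sampling frame, redrawing
    until criteria (a) and (b) hold: at least one group with over 100
    messages in the last week and at least one with under 100."""
    rng = random.Random(seed)
    pool = [g for g in frame if eligible(g, pilot_names)]
    while True:
        sample = rng.sample(pool, n)
        weekly = [g["messages_last_week"] for g in sample]
        if any(w > 100 for w in weekly) and any(w < 100 for w in weekly):
            return sample
```

The rejection-sampling loop is one simple way to honor criteria (a) and (b) while keeping each draw random; it assumes the frame contains groups on both sides of the 100-message threshold.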

Due to constraints placed on this research by Goodreads and the nature of this digital library, all group selections for Goodreads required approval from at least one group moderator per group. Prior to the collection of any data, these moderators were messaged via the site using the invitation letter found in Appendix A, section A.1.1, and provided their consent for their group to be included in the research by agreeing to an informed consent statement (see Appendix A, section A.1.2). Any group for which the moderator did not provide consent within two weeks was removed from the sample and a new group selected, using the same procedures and initial list of groups.

Two additional groups, one from LibraryThing and one from Goodreads, were used for a pilot study of the content analysis procedures, selected at random using the same procedure as above but with only criteria (c) and (d) applied. As with the main sample, the moderator for the Goodreads group selected was contacted to obtain his approval and consent prior to data collection; the moderator of the first group did not respond within two weeks, so a new group was selected. These two groups were selected in December 2012, earlier than the main sample, using the two lists of groups as they were at that time. For the pilot, threads were selected systematically and at random from the threads shown on the group’s front page (i.e. the most recent and active threads) until the total messages per group reached between 50 and 60; in both cases only one thread was selected containing 60 messages. Any thread with fewer than two messages was to be excluded from selection. All messages in the selected threads, up to the 60-message limit, were part of the sample for the pilot test, which totaled 120 messages. At 20% the size of the intended sample for the main content analysis phase, the pilot sample provided sufficient data to assess if the proposed procedures were appropriate and how long this phase of the study would take. The pilot study allowed adjustments to be made for the main content analysis phase, based on problems and difficulties observed.

For the main content analysis phase, the ten groups were selected on April 30, 2013, a later date than the two for the pilot test, using the two lists of groups as they were on that day. A few weeks later, threads were systematically selected at random from the threads shown on each group’s front page (i.e. the most recent and active threads) until the total number of messages per group reached between 50 and 60. As with the pilot, any thread with fewer than two messages was excluded from selection. No more than the first 20 messages in each selected thread were part of the sample, a change from the pilot test made to ensure at least three threads per group were selected and to improve the representativeness of the sample. This was intended to lead to a total sample of between 500 and 600 messages, about half from LibraryThing and half from Goodreads. The samples in practice consisted of 286 messages from LibraryThing and 233 from Goodreads, for a total of 519 messages (see also Chapter 4, section 4.1). For all random and systematic sampling in the pilot and main data collection stages, the starting point and interval were chosen by generating random numbers using Microsoft Excel’s RANDBETWEEN function.
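The thread-selection step can likewise be sketched. This is a loose Python equivalent of the described procedure (random starting point and interval, up to 20 of the earliest messages per thread, stopping near 60 messages per group), with Python's `random` module standing in for Excel's RANDBETWEEN; it is a sketch under those assumptions, not the actual instrument.

```python
import random

def sample_group_messages(threads, cap=60, per_thread=20, seed=None):
    """Systematically select threads from a random starting point with a
    random interval, taking up to per_thread of the earliest messages in
    each selected thread until the group's total reaches cap. Threads
    with fewer than two messages are excluded, as in the study."""
    rng = random.Random(seed)
    n = len(threads)
    start, step = rng.randrange(n), rng.randint(1, n)
    picked, total, visited = [], 0, set()
    for k in range(n):
        if total >= cap:
            break
        i = (start + k * step) % n
        if i in visited:            # the interval may revisit an index; skip it
            continue
        visited.add(i)
        if len(threads[i]) < 2:     # exclude threads with fewer than two messages
            continue
        take = threads[i][:min(per_thread, cap - total)]
        picked.append(take)
        total += len(take)
    return picked
```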

This stratified random sampling procedure was chosen to encourage representativeness of the resulting sample while ensuring data allowing for meaningful analysis was selected. Messages, threads, or groups could be selected purposively, but such a method could result in a sample biased towards a given type of message, thread, or group. Random sampling of groups and threads from the population deemed useful for analysis produced a sample of messages from LibraryThing and Goodreads that can be judged to be quite representative, if not quite equivalent to one generated from simple random sampling since the sampling frames did not include the entire population of groups. The sizes of the sample at each stratum were chosen to balance representativeness against the time and resources necessary to complete content analysis.

3.4.3. Data Collection Procedures

Messages were collected by using a Web browser to access the LibraryThing and Goodreads web sites, following the sampling procedures discussed above. Once a thread was displayed on the screen, up to 20 messages from the thread—starting with the earliest messages—were copied and pasted into a Microsoft Word document; one such file was maintained per thread. As found in the digital libraries, each message’s author, date/time posted, and message content were saved to that file. Images or other media were saved in their original context as far as possible. Members’ identities, as indicated by their usernames, were used to identify common message authors in a thread, to analyze the flow of conversation, and to identify potential participants for later phases of the study. Identities remained confidential and were not part of further analysis, results, or publications; pseudonyms are used in this dissertation (see section 4.1). Avatars from Goodreads were discarded, as members’ usernames were sufficient for this purpose. These documents were stored as discussed in section 3.8 on data management.

3.4.4. Data Analysis

For analysis, the documents were imported into NVivo qualitative analysis software, version 10, running on a MacBook Pro via a virtualized Windows 7 installation. Each message was examined and codes were assigned based on its latent meaning and interpretation. The codes drew from boundary object theory, the social worlds perspective, and the theory of information worlds, which served as an interpretive and theoretical framework for the content analysis (cf. Ahuvia, 2001). These codes were common to multiple phases of this study and can be found in section 3.7 below. So-called "open" codes (not included in the list, but judged by the researcher to be emergent in the data and relevant to the study’s purpose and research questions) could be assigned during the content analysis and coding process, as recommended by Ahuvia (2001) for interpretive content analyses and by others for general qualitative data analysis (e.g. Charmaz, 2006). Findings from the data as coded and analyzed, including open codes, are included in Chapter 4, section 4.1.

3.4.4.1. Pilot test

These coding and analysis procedures were piloted, using data from two of the groups, prior to their use in the main content analysis phase. Two volunteer coders, doctoral students at the FSU School of Information[1], applied the coding scheme and procedures developed for analyzing qualitative data in this study, presented in greater detail in section 3.7 below. The researcher applied the same scheme and procedures. Measures were in place to ensure the validity, reliability, and trustworthiness of the data and analysis, as discussed in section 3.9 below. Both intercoder reliability statistics and holistic, qualitative analysis of the results were used to clarify the scheme and procedures after each round of coding. The changes made to the procedures and coding scheme, and the issues encountered with intercoder reliability statistics, are discussed at length in section 3.7 below.

3.5. Survey

Surveys are a common research method in the social sciences, including library and information science. They allow characteristics of a population to be estimated, via statistics, through analysis of the quantified responses given to questions by a small sample of the population (Fowler, 2002; Hank, Jordan, & Wildemuth, 2009; Sapsford, 1999). Surveys consist of "a set of items, formulated as statements or questions, used to generate a response to each stated item" (Hank et al., 2009, p. 257). The data collected may describe the beliefs, opinions, attitudes, or behaviors of participants on varied topics, although most research surveys have a special purpose and focus (Fowler, 2002). This is true of the survey used here, which focused on obtaining data on uses of LibraryThing and Goodreads by a sample of their users, in the specific context of the sites’ usage as boundary objects within and across social and information worlds.

The following sections cover the components of survey research methods cited by Fowler (2002, pp. 4–8) and Hank et al. (2009) as they apply to the survey used in this study. These include discussion of the unit of analysis, population, and sampling (sections 3.5.1 and 3.5.2); concept operationalization and survey question design (section 3.5.3); pretesting and data collection (section 3.5.4); and data analysis (section 3.5.5). The survey was designed as a coherent whole—as recommended by Fowler (2002, p. 7)—and in relation to the content analysis and interview methods used in other phases of the study.

3.5.1. Unit of Analysis

For the survey phase of this dissertation study, the unit of analysis was the individual LibraryThing or Goodreads user. These users were—and are—understood to be members of one or more communities, social worlds, or information worlds, and to be members of, or frequent visitors to, one or more LibraryThing or Goodreads groups. Analysis of their responses to questions about these groups and other communities they were part of allowed for greater understanding of the roles the digital library plays for them in the context of these worlds. Tentative conclusions could be made about the nine groups from which users were surveyed and about the communities associated with these groups, but generalization to LibraryThing and Goodreads as a whole was not possible, as explained in section 3.5.2 below.

3.5.2. Population and Sampling

The broader population of LibraryThing and Goodreads users totals over 26 million people, and constructing anything resembling a sampling frame—i.e. a complete list of all users of the two sites—is all but impossible. Given the focus in the content analysis phase on nine groups (five from LibraryThing, four from Goodreads), narrowing the population to any user who visits, frequents, or is a member of one or more of these groups made the task of sampling possible and the population compatible with the population of messages used in the content analysis phase. This narrowing led to a less representative population than that of all LibraryThing and Goodreads users, limiting the kinds of analysis that could be performed on the survey data (further details below and in Chapter 4, section 4.2).

Two sampling methods were used to select potential survey participants from this population:

  • A purposive sample, consisting of all LibraryThing users who posted a message within the five LibraryThing groups selected for the content analysis phase. The pool of messages included the messages selected for the main sample in the content analysis phase. (Goodreads did not consent to messaging of Goodreads users for this purpose, so Goodreads users were excluded from this sample.)
  • A convenience sample, consisting of all LibraryThing and Goodreads users who responded to an invitation to participate posted to each of the nine groups selected for the content analysis phase (procedures detailed in section 3.5.5 below).

All users who met the criteria (having posted a message or responded to the invitation) and human subjects requirements for age (between 18 and 65) were allowed to participate, helping to increase the responses collected and the representativeness (as best as possible) of the results obtained.

A true random sample, even from the narrower population, could not be drawn because the researcher could not generate a complete list of visitors to and members of the selected groups. Obtaining such a list from LibraryThing and Goodreads—or from the group moderators, should they have had access to one for their group—would have placed an unreasonable burden on the digital libraries and could have jeopardized their cooperation in and the successful completion of this study; such a list would also have violated the privacy rights of the members of these groups. A random element was included in the sampling process through the use of the randomly selected groups from the content analysis phase, but the sample still lacked much of the representativeness of a true random sample. Users could choose whether to participate, and not all users of the nine groups were guaranteed to see the invitation, making inference beyond the sample subject to selection bias. One may assume survey respondents are at least moderately representative of the population of users of the nine LibraryThing and Goodreads groups, and so conclusions about those users can be drawn using nonparametric statistics. Further details are given in Chapter 4, section 4.2.

3.5.3. Operationalization of Concepts and Instrument Design

The phenomena of interest for the survey were similar to those in the content analysis and interview phases of the study: the concepts of boundary objects, translation, coherence, information worlds, social norms, social types, information values, information behaviors or activities, social worlds, organizations, sites, and technologies. Conceptual definitions for these are found in boundary object theory, the social worlds perspective, the theory of information worlds, and the synthesized theoretical framework for social digital libraries (see Chapter 2). For the purposes of the survey, and in the context of answering the research questions of this study, these concepts were operationalized through a set of Likert-scaled questions (Brill, 2008; McIver & Carmines, 1981), adapted from the conceptual definitions found in the literature, theories, and synthesis thereof. These questions can be found as part of the survey instrument in Appendix B, section B.1.

Four to six Likert items (Brill, 2008; McIver & Carmines, 1981) for each of the concepts and phenomena of interest were included in the survey. A symmetric five-point scale was used for each item, as is traditional for Likert items (Brill, 2008); five response choices provide higher levels of reliability without offering respondents too many choices (Brill, 2008), and questions can be re-scaled without significant loss of statistical validity (Dawes, 2008). Each item used the following labels for response choices: Strongly Agree (5), Agree, Neutral, Disagree, and Strongly Disagree (1). In analysis, each item was assigned a numeric rating (5 to 1) and the ratings were summed to form Likert scales for each phenomenon (Brill, 2008; McIver & Carmines, 1981). Statistical analysis checked the internal consistency and reliability of each scale, and items that contributed to lower levels of reliability were dropped (see sections 3.5.5 and 3.9 below, and Chapter 4, section 4.2.1). Using at least four items per scale allowed for appropriate statistical analysis to proceed.
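The scoring just described amounts to mapping each response label to a rating from 5 down to 1 and summing the item ratings into a raw scale score; a minimal sketch follows, with hypothetical item responses for illustration.

```python
# Numeric ratings for the five response labels used in the survey.
RATINGS = {
    "Strongly Agree": 5,
    "Agree": 4,
    "Neutral": 3,
    "Disagree": 2,
    "Strongly Disagree": 1,
}

def raw_scale_score(responses):
    """Sum one respondent's item ratings on a single Likert scale."""
    return sum(RATINGS[r] for r in responses)

# A hypothetical four-item scale for one respondent:
total = raw_scale_score(["Agree", "Strongly Agree", "Neutral", "Agree"])  # 16
```

Dividing such a sum by the number of items yields the one-to-five scale averages used in the analysis described in section 3.5.5.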

Questions were developed, based on the literature and theoretical framework reviewed in Chapter 2, to measure each of the phenomena of interest. Hank et al. (2009, pp. 257–258) provided a list of suggestions for constructing survey instruments and writing questions: ensure questions are answerable, are stated in complete sentences, use neutral and unbiased language, are at an appropriate level of specificity, and are not double-barreled. They also suggested participants should not be forced to answer any one question. Fowler (2002, pp. 76–103) included a chapter on designing questions that are good measures in his book on survey research methods. He cautioned researchers to ensure questions are worded adequately; mean the same to, and can be understood by, all respondents; can be answered given the respondents’ knowledge and memory; and do not make respondents feel uncomfortable or unwilling to give a true, accurate answer. According to Fowler, researchers should not ask two questions at once. Sapsford (1999, pp. 119–122) agreed, suggesting care be taken to ensure questions are precise, lack ambiguity, are easy to understand, and use colloquial language. The survey questions used in this study, found in Appendix B, section B.1, were developed by the researcher and reviewed with his supervisory committee in light of this advice.

An additional set of demographic and usage questions formed a separate section at the end of the survey instrument, as recommended by Peterson (2000, as cited in Hank et al., 2009, p. 258). These questions allowed for collection of data on other variables of potential relevance to the phenomena of interest, including use of the Internet, of LibraryThing and Goodreads, of the groups feature of the sites, and of other social media and social networking web sites, as well as demographic factors such as age and gender. These demographic questions can be found in Appendix B, section B.1.

3.5.4. Data Collection Procedures

3.5.4.1. Pretest

The first stage of data collection was to pretest the survey instrument to help ensure its reliability and validity (Hank et al., 2009, p. 259). A convenience sample of graduate students and graduate alumni of Florida State University was invited to pretest the survey and answer a few short, open-ended questions about their experience. Recruitment took place via face-to-face discussion, e-mail, and Facebook messages. All pretesters came from the School of Information; initial attempts were made to have this sample represent multiple departments from the university, but no students from the other departments contacted (Business and Communication) volunteered. Flyers were posted later in the pretest period and the survey opened via a direct link, to see if undergraduate or graduate students from other departments would be interested, but no responses were received through the link. One School of Information faculty member also volunteered to pretest the survey, and his input was welcomed alongside that of the students. Minor changes were made as a result: the number of questions was reduced slightly to lessen perceived repetitiveness, and questions that pretesters reported getting stuck on were clarified. The pretest also helped confirm the expected time to complete the survey.

3.5.4.2. Main survey

The second stage of data collection was to select the samples discussed in section 3.5.2 and send invitations to participate to them. A couple of weeks before this began, the researcher contacted LibraryThing and the moderators of each Goodreads group to inform them of the beginning of the survey. A staff member from LibraryThing posted a short message in each group to let users know that the research would be taking place and had been given LibraryThing’s approval, to ensure invitations were not seen as spam. (LibraryThing required this step as part of their approval of the research; see Appendix E , section E.1 .) Goodreads moderators were welcome to inform their groups of the upcoming research.

The purposive sample was drawn from LibraryThing users who posted messages collected during the content analysis phase. Each of these users was sent an invitation letter, included in Appendix A, section A.2.1.1. LibraryThing’s private message feature was used to send the invitations, since while LibraryThing users can include an e-mail address in their profile, not all did so. Reminder letters (Appendix A, section A.2.1.2) were sent two weeks and four weeks after the beginning of data collection, reminding individuals who had not completed the survey and thanking users who had. The convenience sample was drawn by posting an invitation, included in Appendix A, section A.2.2, to each of the LibraryThing and Goodreads groups selected during the content analysis phase. This invitation was re-posted to the same groups two weeks and four weeks after the beginning of data collection, to help ensure as many group members and visitors as possible saw it and had a chance to respond. Permission was granted by LibraryThing and Goodreads staff for this method of data collection (see Appendix E, sections E.1 and E.2).

Participants were given a total of six weeks to complete the survey from August 26, 2013, the date data collection first began for this phase of the study. The survey was expected to take users about 15 to 20 minutes; the pretesters, who had more subject knowledge, took between 7 and 16 minutes, broadly confirming this estimate. The reminders at two and four weeks, the number of visitors to and members of the nine groups, and the number of users directly invited on LibraryThing led to sufficient data for analysis (see Chapter 4, section 4.2), although snowball sampling and other techniques were held in reserve in case they were necessary.

3.5.4.3. Compensation

To encourage participation, compensation was offered in the form of a drawing for one of ten $25 Amazon.com, Barnes and Noble, or Books-A-Million gift cards. These stores were selected because they include the most popular online bookstore—Amazon.com, which acquired Goodreads after this selection was made—and the two most popular brick-and-mortar bookstores (which also have an online presence). Participants were given a choice of which store they would prefer, increasing the potential usefulness of the gift card to them and reducing potential bias created by supporting only one store. Other bookstores are smaller, do not offer online gift cards, or have few locations; offering gift cards from every possible store would have presented logistical challenges. The e-mail addresses of all participants who completed the survey and included an e-mail address in their response were entered into a Microsoft Excel spreadsheet (maintained under the data management procedures detailed in section 3.8). Gift card codes were e-mailed to 10 randomly selected addresses—chosen by using Excel’s RANDBETWEEN function to generate 10 random numbers between 1 and the number of users who took the survey, then selecting those users from the spreadsheet—for the store each winner preferred; these were sent on November 9, about one month after the survey was closed. Funds for the gift cards came from a Beta Phi Mu Eugene Garfield Doctoral Dissertation Fellowship, which I gratefully acknowledge.
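The drawing described above can be approximated in a few lines; the entrant addresses are hypothetical, and where repeated RANDBETWEEN calls could in principle return duplicate row numbers, `random.sample` draws ten distinct entrants directly.

```python
import random

# Hypothetical spreadsheet rows of entrants' e-mail addresses.
entrants = [f"user{i}@example.com" for i in range(1, 201)]

# Draw ten distinct winners at random, analogous to generating ten
# RANDBETWEEN(1, len(entrants)) row numbers and reading off those rows.
winners = random.sample(entrants, 10)
```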

3.5.4.4. Online hosting

The survey instrument was hosted online using Qualtrics online survey software, made available by FSU to all students and faculty. An online, Internet-based survey provided the greatest chance of reaching users of LibraryThing and Goodreads in the context of their use of the site and their interactions with other users. It cost less—survey hosting for a questionnaire of any length is provided free by Qualtrics in association with FSU—and took less time than a self-administered paper survey was expected to, while providing for honest answers and requiring less direct researcher involvement compared with an administered paper or telephone survey (Fowler, 2002, pp. 71–74). Participants completed the survey by following a link in the invitation letters; two separate links were used for users of LibraryThing and Goodreads, so that the survey could be personalized to refer to each digital library by name.

3.5.4.5. Consent and follow-up

The first page of the survey included an informed consent statement, included in Appendix A, section A.2.3, which participants had to agree to before they could begin answering the survey questions. As seen in the last few questions in Appendix B, section B.1, participants were asked for their e-mail address for purposes of compensation, to indicate whether they were interested in participating in a follow-up interview, and to indicate whether they desired a report of the findings once the study was complete. These e-mail addresses are kept confidential and stored in a secure, password-protected encrypted volume, its password known only to the researcher. Details of data management are discussed in section 3.8.

3.5.5. Data Analysis

The survey results were analyzed using SPSS statistical analysis software running on Windows, accessed through a virtual lab environment supported by FSU. First, the Likert scales were analyzed to determine their internal consistency and reliability via Cronbach’s alpha, following the procedures related by George and Mallery (2010). Individual items were dropped from a scale if their removal would increase the Cronbach’s alpha (and thus the reliability) of the overall scale; this procedure and its results are detailed in Chapter 4, section 4.2.1. The average of the remaining items in the scale was then taken, resulting in one value ranging from one to five for each of the concepts being measured. Combined with the demographic variables collected in the second part of the survey, these were analyzed using appropriate, mostly nonparametric statistics, including chi-square analysis, Mann-Whitney U tests, median tests, Kruskal-Wallis tests, Wilcoxon signed-rank tests, and Kendall’s τ correlations (see Chapter 4, section 4.2 for details).
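Cronbach's alpha for a scale can be computed from the item variances and the variance of respondents' total scores; a minimal sketch follows, with hypothetical rating data (SPSS performs the equivalent calculation, along with the alpha-if-item-deleted values used to decide which items to drop).

```python
def variance(xs):
    """Unbiased sample variance."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: k lists, each holding one item's ratings across all respondents."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]  # per-respondent scale totals
    item_var = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Three hypothetical, perfectly consistent items yield an alpha of 1.0.
alpha = cronbach_alpha([[5, 4, 3, 2], [5, 4, 3, 2], [5, 4, 3, 2]])  # 1.0
```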

3.6. Interviews

Qualitative interviewing, used in the third phase of this study, is a descriptive and interpretive research method that seeks meaning (Kvale & Brinkmann, 2009). While interviewers may seek basic facts, explanations, and statistics, nuanced explorations and descriptions of phenomena are of core interest. Interviews in qualitative and mixed-methods research projects are used "to understand themes of the lived daily world from the [participants’] own perspectives" (p. 24), through researcher interpretation of "the meaning of the described phenomena" (p. 27). Interviews for research purposes are often seen as a form of "professional conversation" (p. 2; see also Lincoln & Guba, 1985a, p. 268; Sutton, 2010, p. 4388) between the interviewer and the interviewee, on given themes introduced by the interviewer but assumed to be of mutual interest to the interviewee. The two "act in relation to each other and reciprocally influence each other" (Kvale & Brinkmann, 2009, p. 32). Interviewees choose specific instances, examples, or areas within the chosen theme(s) to discuss with the interviewer.

Interviews serve as a source of data on phenomena from the past, present, or (potential) future of interviewees, including "persons, events, activities, organizations, feelings, motivations, claims, concerns, … other entities" (Lincoln & Guba, 1985a, p. 268), and the complex interrelations between all of these. Interviews can help to verify ("member check"), extend, and triangulate data and information already obtained via other methods (Creswell & Plano Clark, 2011; Lincoln & Guba, 1985a). They allow for the gathering of research data when the researcher or his/her colleagues cannot conduct an ethnographic participant observation due to time, location, language, or other constraints (Sutton, 2010).

This dissertation study used semi-structured qualitative interviews employing the critical incident technique (Fisher & Oulton, 1999; Flanagan, 1954; Woolsey, 1986) to explore and describe the phenomena surrounding the roles of LibraryThing and Goodreads, as boundary objects, within and across social and information worlds. Interviews helped find nuances and details that were not possible to determine through the survey questionnaire and were missed, glossed over, or not observable during content analysis. The following sections discuss the strengths of interviews for this study, the chosen unit of analysis, population and sampling procedures, design of the interview instrument, procedures used for conducting the interviews, and data analysis.

3.6.1. Strengths of Interviews

The strengths of qualitative interviews are a good fit with the framework and perspective taken in this dissertation, as evidenced by the use of interviews in many of the studies of social digital libraries reviewed in Chapter 2 (Bishop, 1999; Bishop et al., 2000; Chu, 2008; Farrell et al., 2009; Marchionini et al., 2003; Star et al., 2003; Van House, 2003; You, 2010) and their frequent use in studies of social and information worlds and of boundary objects (see Burnett, Burnett, et al., 2009; Burnett, Subramaniam, et al., 2009; Chatman, 1992; Clarke & Star, 2008; Gal et al., 2004; Gibson, 2011, 2013; Kazmer & Haythornthwaite, 2001). Thick, nuanced description of meanings, close to users’ thoughts (Forsythe, 2001; Geertz, 1973; Kvale & Brinkmann, 2009), was intended to help expose the social construction of these meanings and of the phenomena of social and information worlds, an aim that was realized (see Chapter 4, section 4.3). Since true ethnographic observation would have been difficult to arrange and could have missed the social elements of interest, qualitative interviews were the best choice for returning rich, descriptive data on participants’ social and information worlds and the roles LibraryThing and Goodreads play in them. The qualitative interviewing literature states that the flexibility of the technique accommodates the different contexts interviewees—with varying interests and backgrounds—come from, allowing the interviewer to adjust (Kvale & Brinkmann, 2009; Westbrook, 1997); this proved true in practice. The development of rapport can also build opportunities for future follow-up, longitudinal research with the same participants, exploring the results of this study in greater detail (Westbrook, 1997). 
Participants’ understanding of the roles of LibraryThing and Goodreads in the social and information worlds they are part of is at the core of this study, and obtaining descriptions and perspectives of participants’ "lived worlds" and their "understanding of the meanings in their lived world" was an appropriate use of interviews that played to their strengths (Kvale & Brinkmann, 2009, p. 116).

3.6.2. Unit of Analysis

The unit of analysis chosen for the interview phase of the study was the individual user of LibraryThing or Goodreads. These users were understood, as in the survey phase, to be part of one or more social or information worlds, and their participation in and responses to the interview informed analysis of the roles of LibraryThing and Goodreads in their experiences, in these existing worlds, and in the potential emergence of new worlds. As discussed above and in Chapter 2, while individuals were interviewed, the theoretical framework underlying this study allowed for multi-leveled analysis, taking advantage of the strengths of interviews over other methods while minimizing their weaknesses.

3.6.3. Population and Sampling

The broader population of LibraryThing and Goodreads users totals over 26 million people; as with the survey phase of the study, sampling from this large population would have presented major logistical challenges. Given the existing sample of users selected to take the survey, restricting the pool of potential interview participants to this subgroup of the population—a ready-made sampling frame—provided a manageable task, if not anything approaching a true random sample. This method of sampling was appropriate because data were available from the survey about these users, their social and information worlds, and the roles LibraryThing and Goodreads may play in them, leading to more insightful interview data.

The interview phase used purposive sampling of users whose survey responses indicated they could provide insightful data on the roles of LibraryThing and Goodreads in existing and emergent social and information worlds. This determination was made by examining the content analysis and survey findings and prioritizing the scores and variables of most interest. Users who indicated they would be willing to participate in follow-up research served as the sampling frame, from which participants were chosen with an eye towards obtaining thick description (Geertz, 1973) of the phenomena under study, given other constraints such as time and availability. As interviews continued towards saturation, these criteria were reviewed and revised, and ensuring that interviewees were at least moderately representative of the group of survey participants became a concern. True and complete representativeness is not necessary when using qualitative interviewing, but saturation of findings is (Bauer & Aarts, 2000; Gaskell & Bauer, 2000; Westbrook, 1997), and so sampling continued "until further exemplars"—interviewees in this study—"fail[ed] to add new nuances or to contradict what is understood" from the existing collected data (Westbrook, 1997, p. 147). This sampling method was chosen to obtain data to answer the research questions—from the interviews and in combination with findings from the other two methods—and to provide an accurate representation of LibraryThing and Goodreads in the context of the communities of users from the nine groups selected at the beginning of the content analysis phase.

Participants selected because they were expected to provide insightful data were invited to take part via the e-mail addresses they provided when confirming their willingness to be interviewed. The letter sent to prospective interviewees is in Appendix A, section A.3.1. An initial sample of six prospective interviewees—three from each digital library—was e-mailed first, allowing interviews to be arranged within a week or two of the contact date rather than scheduled so far in advance that participants might forget about them. Further prospective participants were invited every week or two thereafter, when necessary to increase the sample size. When selected users did not respond to the initial request, a second request was made one to two weeks later, except in cases at the end of interview data collection when saturation had been reached. New users replaced the original ones in the sample if the latter did not respond after two to three weeks.

3.6.3.1. Pretest

Prior to collection of actual interview data, the interview instrument and procedures (discussed in the next two sections) were pretested with an additional convenience sample of two FSU School of Information alumni and one FSU School of Information faculty member who had helped pretest the survey. The procedures were identical to those discussed below for the main interview phase. Pretesting allowed for refinement of the instrument and procedures, ensured questions were understandable by a broader population, and permitted any necessary adjustments to the sampling method for the main interviewing process. No transcription or data analysis from this pretest took place, and audio recordings made to test procedures were used only to refine the interview instrument and procedures; they were deleted once the main interviews began. No specific changes were made to the instrument, although the potential need for additional prompting for a few questions was noted; quirks and foibles of the recording software were also discovered, leading to tighter and more careful following of recording steps for the main set of interviews.

3.6.4. Instrument Design

The interviews were semi-structured; they used an instrument as a guide, but were treated as a conversation guided by the interviewer’s questions and the interviewees’ personal responses and reflections (Kvale & Brinkmann, 2009; Lincoln & Guba, 1985a). The instrument, included in Appendix C, provided pre-planned questions and themes, but additional follow-up questions and prompts not included in the instrument emerged from the conversation and its natural progression. This allowed key themes related to the research questions to be discussed and focused on without restricting the interview to no more than a given set of questions in advance (cf. Suchman & Jordan, 1990).

Key themes explored in the interviews included

  • participants’ use of LibraryThing or Goodreads, focusing on use as a boundary object;
  • the social and information worlds of participants, and their relationship to LibraryThing or Goodreads;
  • the characteristics of these social and information worlds—their social norms, social types, information values, information behaviors, activities, organizations, sites, and technologies—and their impact on the user and their use of LibraryThing or Goodreads;
  • translation between, coherence across, and convergence of social and information worlds, via LibraryThing or Goodreads; and
  • the emergence of new social or information worlds through translation, convergence, or related activities and behaviors of LibraryThing or Goodreads users.

Focusing on critical incidents (Fisher & Oulton, 1999; Flanagan, 1954; Woolsey, 1986), times when users interacted with others while using the LibraryThing or Goodreads digital libraries, helped provide a rich environment and context for exploring these themes in detail with each interviewee. The degree to which individual interviewees focused on the critical incident versus the broader spectrum of their use varied, but this was accepted as a natural, emergent element of the interviews, and follow-up questions and prompts were used to ensure sufficient data were elicited on the incidents. The questions included in the instrument, and the prompts and follow-ups used, drew on the advice set down by Kvale and Brinkmann (2009, pp. 130–140) in their discussion of scripting interviews and types of interview questions, including

  • introducing themes before asking detailed questions;
  • focusing on descriptions of what occurred and how during critical incidents, instead of why it happened (at least to begin with);
  • following up on responses as appropriate;
  • seeking projection of interviewees’ opinions or the opinions of others in their social and information worlds; and
  • checking the researcher’s interpretation of previous findings and interview responses.

3.6.5. Data Collection Procedures

As mentioned above, prior to collection of actual interview data the interview instrument and procedures were pretested with two FSU iSchool alumni and one FSU iSchool faculty member.

3.6.5.1. Preparation and recording

After participants agreed to be interviewed by replying to the invitation discussed in section 3.6.3, a specific date and time were arranged for the interview to take place. Since no participants were located close to Tallahassee (and few were expected to be), face-to-face interviews would have been difficult to accomplish. For this reason, it was planned that interviews would take place using online audiovisual media, an approach popular in studies of "Internet-based activity … where the research participants are already comfortable with online interactions" (Kazmer & Xie, 2008, pp. 257–258). Interviewees were offered a choice of Skype (skype.com), Google Hangouts (accessible via plus.google.com), Apple FaceTime (apple.com), or telephone. Interviews were audio recorded with interviewee permission; GarageBand (apple.com/ilife/garageband) and Soundflower (cycling74.com/products/soundflower) software were used to record Skype and Apple FaceTime calls, while telephone calls were recorded via Google Hangouts, Google Voice (voice.google.com), GarageBand, and Soundflower. No participants chose Google Hangouts, and more than expected chose telephone calls; while online audiovisual media were the intended plan, interviewees’ preferences were accommodated, and this did not cause any major issues with collecting interview data.

The interviewer took any notes he felt necessary on his impressions of the interview as soon as it had concluded, so as not to distract the interviewee with note taking while still ensuring an accurate capture of the interview process. Most interviews took between 40 and 55 minutes; full details are given in Chapter 4, section 4.3. These interview procedures allowed a level of data equivalent to or greater than that of face-to-face interviews to be gathered, minimizing the potential weaknesses of a non-traditional interview setting while maintaining the strengths of synchronous interviews (Kazmer & Xie, 2008).

3.6.5.2. Introduction and informed consent

The interview process began with introductions, thanking the interviewees for participating, explaining the logistics of the interview, and ensuring that informed consent was obtained. Since obtaining written consent in person was not possible, participants were e-mailed a link to a page (the content for which is shown in Appendix A, section A.3.2) requesting their consent for the interviews, including the interview informed consent form, a couple of days before the interview. (This used the same FSU-partnered Qualtrics system as the survey.) I asked interviewees to review this page and raise any questions they had. Before the interview recording began, consenting participants clicked an "I consent" button at the bottom of the page; some did this before audio or video contact was made, while others waited until I directed them there just before the interview began. I then reviewed "the nature and purpose of the interview" with the interviewee, to ensure they knew the overall theme and topic of discussion (Lincoln & Guba, 1985a, p. 270). Prior to the critical incident portion of the interview, I asked a general, "grand tour"-type question (with follow-up prompts as necessary) to explore participants’ use of LibraryThing or Goodreads, the reasons for this use, and the groups they participated in.

3.6.5.3. Critical incident technique

The largest portion of the interview employed the critical incident technique, a flexible interviewing technique intended to obtain "certain important facts concerning behavior in defined situations" (Flanagan, 1954, p. 335). First developed for use in aviation psychology, it has become a popular interviewing technique in the social sciences, education, and business, including LIS (Butterfield, Borgen, Amundson, & Maglio, 2005; Fisher & Oulton, 1999; Urquhart et al., 2003; Woolsey, 1986). It is often used in exploratory research to build theories, models, or frameworks for later testing and refinement, as typified by Savolainen’s (1995) research establishing his Everyday Life Information Seeking (ELIS) model. Flanagan (1954) outlined five main stages in the technique. The first two stages, providing operational definitions and structure for the interviews, have been discussed in the sections above. The fourth and fifth, procedures for analysis and interpretation of the data gathered from interviews, are discussed in sections 3.6.6 and 3.7 below.

The third stage is the actual collection of a critical incident from each interviewee. In a critical incident interview, after initial introductions and formalities, the interviewer asks the interviewee to recall an incident where given situation(s) or behavior(s) occurred, as defined during the previous stages. Per Flanagan (1954), these incidents should be recent enough to ensure participants have not forgotten the details of them. Specific language is used to get interviewees to think of such an incident. In this study, the following language was used, with slight changes incorporated in the context of a given interview:

Now I’d like you to think of a time within the past few weeks where you interacted with others, either people you already knew or people you did not know, while using [LibraryThing / Goodreads]. (Pause until such an incident is in mind, or gently prompt the interviewee if they have trouble recollecting one.) Could you tell me about this interaction and how it came about?

This initial question allowed interviewees to refresh their memory of the incident by going over it in their mind, and provided data on their overall impressions of the interaction and how it came about. After this initial discussion, I guided the conversation with gentle prompts and follow-up questions designed to steer the conversation about the incident to the themes mentioned in section 3.6.4 above. Main questions were included in the interview instrument (see Appendix C); prompts were not. All questions and prompts were aimed at eliciting "the beliefs, opinions, … suggestions … thoughts, feelings, and [reasons] why participants behaved" that way during their interaction (Butterfield et al., 2005, p. 490), in the context of LibraryThing or Goodreads and the social and information worlds at play in the incident.

3.6.5.4. Finishing up

Once the critical incident had been explored at length, the interview concluded with final questions intended to help validate and generalize the findings obtained from the critical incident portion of the interview, a process often called "member checking" (Lincoln & Guba, 1985a). I gave an overall impression of the role or roles I felt LibraryThing or Goodreads played in the incident and in the interviewee’s overall use of the site, and asked whether this impression seemed correct to the interviewee or, if they responded before I could ask, engaged them in further reflective conversation. These checks confirmed whether the incidents participants shared matched their overall experiences. I concluded the interview by thanking interviewees for their time and participation and answering any questions they had (as a couple did, about where the research was going and when they would hear about the overall findings). As mentioned above, as soon as the interview was over I took time to write up any notes I felt were necessary, to capture any elements of the experience that risked being lost to fading memory. Interviewees were thanked again for their participation and help via e-mail follow-ups a few days to a week later.

3.6.6. Data Analysis

All interview audio was transcribed by the researcher, who used Audacity software (audacity.sourceforge.net) to play back the interviews and Microsoft Word to enter the transcriptions. Parts that were difficult to understand could be slowed down or amplified using Audacity’s built-in features; its noise reduction features were helpful for one or two interview recordings. Any notes not already in digital form were transcribed. All notes, audio, and transcriptions were stored as discussed in section 3.8.

Data analysis proceeded in a similar fashion to the content analysis phase of the study. Transcripts and notes were imported into NVivo 10 qualitative analysis software, which was used to look over each file and assign codes to sentences and passages. As with the earlier qualitative method, the codes assigned drew from boundary object theory, the social worlds perspective, and the theory of information worlds, which served as an interpretive and theoretical framework for analyzing the meaning of interview responses. These codes can be found in section 3.7 below. Open codes not included in the list but judged to be emergent in the data and relevant to the study’s purpose and research questions could be assigned during the coding process, as recommended by Charmaz (2006) and Kvale and Brinkmann (2009, p. 202), among others; these included open codes from the content analysis phase. Measures to ensure the trustworthiness of the data and analysis were taken as discussed in section 3.9.

3.7. Qualitative Data Analysis

All qualitative data—consisting of the messages collected for the content analysis and transcripts and notes from the interviews—were imported into NVivo 10 qualitative analysis software, which was used to look over each transcript and assign codes.

For analysis, an approach similar to grounded theory (Charmaz, 2006; Strauss & Corbin, 1994) and its constant comparative method was taken, but without the same focus on open coding. Codes were first applied to sentences in messages or in participants’ interview responses (as transcribed). Only the lowest, most detailed level of codes, as presented in the codebook (sections 3.7.2 and 3.7.3 below), was applied. Two exceptions to sentence-level coding were allowed. For the content analysis phase, no more than two codes could be applied to an entire message if there was clear evidence for them throughout the message. For the interview phase, no more than two codes could be applied to a paragraph, answer to a question, or short exchange (no more than half a page) if there was clear evidence for them throughout the paragraph, answer, or exchange. No other exceptions were allowed: codes could not be applied to units smaller than sentences (to provide sufficient context), and a code spanning multiple messages, answers, or exchanges had to be applied to each of them individually. Memos and annotations were made to explain any cases where codes were applied across multiple sentences within a message or interview transcript at once, and to explain codes in greater detail where deemed necessary; a general rule of "if in doubt, add an annotation" was followed throughout analysis. These rules were refined and clarified after initial pilot testing, details of which are given in section 3.7.1 below.

After initial analysis, higher levels of analysis looked at the coding in the context of paragraphs, entire messages, message threads, and larger portions of interview transcripts, considering these in light of other threads, messages, and interviews. Throughout the coding and analysis process, consideration of the social and information worlds was explicitly multi-leveled: worlds of multiple sizes, shapes, and types were considered throughout the processes of collecting and analyzing data. The boundaries of these worlds, and where these worlds fell on the continuum of existing and emergent worlds, were considered emergent from the data, based on the conceptual, theoretical, and operational definitions given in earlier sections and in the coding scheme below. Memos and annotations were provided to explain the levels of social and information worlds under consideration, especially when boundary-related codes were applied.

The search, query, and report features of NVivo were used in further analysis and the writing of sections 4.1 and 4.3 of Chapter 4. While messages and individual interviews (as the units of analysis) and sentences within them were coded as individual units, higher-level units—passages, threads, groups, social and information worlds, and LibraryThing and Goodreads—were considered as the analysis proceeded. This allowed findings and conclusions to be drawn at multiple levels, as can be seen in Chapters 4 and 5.

3.7.1. Pilot Testing and Resulting Changes

Pilot testing of the coding scheme and analysis procedures was conducted prior to the content analysis phase. Two fellow FSU iSchool doctoral students, both with basic familiarity with the theories incorporated into the theoretical framework used here, were recruited to test intercoder reliability. Each student volunteer was provided with a "quick reference" version of the coding scheme in sections 3.7.2 and 3.7.3 below; the final version, used by the researcher as a guide for analysis, is included in Appendix D. Pilot test coders were given a summary of the coding rules and guidelines discussed herein. The second volunteer discussed the coding scheme, rules, and guidelines at some length with the researcher—including some brief practice coding—before coding began, and both volunteers took part in debriefing sessions with the researcher after coding had been completed. The researcher and the first volunteer coded the messages selected for the pilot test of the content analysis phase: 120 messages, 60 from one LibraryThing group and 60 from one Goodreads group. Changes were made after this coding cycle based on intercoder reliability statistics—using Cohen’s (1960) kappa as calculated by NVivo—and on qualitative, holistic analysis of the results, and a second cycle proceeded. Further changes were made after this second cycle.
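Cohen’s kappa adjusts raw percent agreement for the agreement expected by chance, given each coder’s marginal label frequencies. In this study the statistic was calculated by NVivo; purely as an illustration of the measure, a minimal sketch follows, with a hypothetical set of codes assigned by two coders to ten sentences (the function and labels are invented for demonstration):

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's (1960) kappa for two coders labeling the same units."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of units given identical codes.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[code] * freq_b[code] for code in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes from two coders; they agree on 8 of 10 sentences.
a = ["norms", "types", "norms", "value", "norms",
     "types", "value", "norms", "types", "value"]
b = ["norms", "types", "value", "value", "norms",
     "norms", "value", "norms", "types", "value"]
print(round(cohens_kappa(a, b), 3))
```

Note that kappa falls well below the raw 80% agreement, which is exactly the correction the statistic provides; this gap is one reason raw agreement alone overstates reliability.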

Changes were made to address weaknesses identified in the original procedures, coding scheme, and theoretical framework, to help ensure theoretical and operational clarity. Changes made after the first cycle were as follows:

  • Codes were only to be applied at the sentence level, with the two exceptions mentioned earlier.
  • Memos and annotations were stressed, especially to explain codes applied at levels higher than the sentence and to explain coding in greater detail where deemed necessary.
  • Boundaries of worlds were to be considered emergent from the data, with memos and annotations recommended to explain the level of social and information worlds under consideration.
  • Definitions for all concepts were refined and tightened.
  • Cases where social norms or information value had broad application, across substantial parts of a thread or interview, were to be memoed or annotated instead of coded, since the latter was seen to be of less use for later analysis.
  • Information behavior was tightened to consider only behavior that was normative at some level and to exclude general occurrences of information behavior, since under the latter interpretation whole threads and interviews could be coded.
  • If it was unclear whether a new world—of any size or scale—had truly emerged, memos and annotations were recommended to express the degree of confidence.
  • Three subcodes were added to account for different cases of LibraryThing or Goodreads acting as a standard boundary object: as an emergent site, an emergent technology / ICT, or another type of emergent boundary object.

Further changes were made after the second cycle of coding, following discussion among the researcher and multiple committee members:

  • The distinction between existing and emergent was stressed to lie along a continuum, and to be a phenomenon that would emerge from the research data, similar to the size and shape of the worlds and their boundaries. Memos and annotations were further stressed as a way to elaborate on where given cases fall on this continuum.
  • Codes and procedures were acknowledged to be complex, and to use theories that had not been combined in previous research; the theoretical framework is emergent. As such, intercoder reliability statistics—run using Cohen’s (1960) kappa after each coding cycle of the pilot test and initially planned for a portion of the interview data—were considered a less appropriate measure of the potential trustworthiness, credibility, transferability, dependability, and confirmability of the findings than originally thought. Both pilot tests showed that reaching high statistical levels of intercoder reliability would require extensive training of other coders—difficult if not impossible in dissertation research—and much fine-tuning of rules and procedures, fine-tuning that does not fit the interpretive and social constructionist paradigms in use for this research. Other techniques for ensuring qualitative trustworthiness (Gaskell & Bauer, 2000; Lincoln & Guba, 1985), already built into the study (see section 3.9.3), would now be emphasized alongside intracoder reliability checking at the conclusion of the study; results of the latter are included in Chapter 4.

The following sections present the coding scheme used for each research question, as revised after pilot testing. Section 3.7.2 includes the codes focusing on existing social and information worlds (RQ1), while section 3.7.3 includes the codes focusing on emergent social and information worlds (RQ2). The distinction between existing and emergent was treated as lying along a continuum, with the degree to which a world was existing or emergent allowed to emerge from the research data; frequent memos and annotations were made on this during analysis. An operational definition is given for the concept each code represents, as used in the coding and analysis of data from the content analysis and interview phases. These definitions come from the literature review presented in Chapter 2 and the theories and theoretical framework described therein, with contributions from definitions in the Oxford English Dictionary’s online version (oed.com) where necessary and appropriate. A summarized version of the coding scheme, used as a quick reference during coding and analysis, is included as Appendix D.

3.7.2. Existing Worlds

3.7.2.1. Translation

Star and Griesemer (1989) defined translation as "the task of reconciling [the] meanings" of objects, methods, and concepts across social worlds (p. 388) so people can "work together" (p. 389). Multiple translations, gatekeepers, or "passage points" can exist between different social worlds (p. 390). This was operationalized as the process of reconciliation and translation of meanings—taken to include understandings—between different people, social worlds, or information worlds.

3.7.2.2. Coherence

While Star and Griesemer (1989) never gave coherence an explicit, glossary-style definition, it can be conceptualized as the degree of consistency between different translations and social or information worlds. Boundary objects play a critical role "in developing and maintaining coherence across intersecting social worlds" (p. 393). Coherence was operationalized using the common characteristics of social and information worlds, coded under the definitions given below. Coding took place at the level of these characteristics, not for coherence in general.

Social norms : Burnett, Besant, and Chatman (2001, p. 537) defined social norms as the "standards of ‘rightness’ and ‘wrongness’ in social appearances" that apply in an information world. Jaeger and Burnett (2010, p. 22) restated this as "a world’s shared sense of the appropriateness—the rightness or wrongness—of social appearances and observable behaviors." Drawing from these, social norms were operationally defined as the common standards and sense of appropriate (right or wrong) behaviors, activities, and social appearances in an information world. In some cases, a substantial part of or an entire thread or interview could be seen as socially normative, but it was decided that in those cases the social norms code would not be applied to every message or sentence, as doing so would not be of much use for later analysis. Instead, a memo or annotation was made to note and discuss the application of social norms to large parts of a thread or interview.

Social types : Burnett et al. (2001, p. 537) defined social types as "the [social] classification of a person." Jaeger and Burnett (2010, p. 22) elaborated on this, stating social types are "the ways in which individuals are perceived and defined within the context of their [information] world." This was operationalized following the latter definition and to include explicit and implicit roles, status, and hierarchy.

Information value : Jaeger and Burnett (2010, p. 35) defined information value as "a shared sense of a relative scale of the importance of information, of whether particular kinds of information are worth one’s attention or not." Such values may include, but are not limited to, "emotional, spiritual, cultural, political, or economic value—or some combination" (p. 35). Values may be explicit and acknowledged, or implicit within message content or interview responses. A succinct operational definition, used in this study for coding, is that information value is a shared sense, explicit or implicit, of the relative scale of the importance—emotionally, spiritually, culturally, politically, and/or economically—of information and whether it is worth attention. As with social norms, if a substantial part of or an entire thread or interview was seen as expressing the shared information values of a world, the code was not applied to every message or sentence; instead a memo or annotation was used.

Information behavior and activities : Burnett and Jaeger (2008, "Small worlds" section, para. 8) defined information behavior as "the full spectrum of normative [information] behavior … available to members of a … world"; this was restated in different words by Jaeger and Burnett (2010, p. 23). Information behavior can include seeking, searching, sharing, or use of data, information, or knowledge; communication and interaction; and avoidance of data, information, or knowledge. Strauss (1978) did not provide an explicit definition of activities, but his use of the word within the social worlds perspective corresponds with one of its senses in the Oxford English Dictionary: "something which a person, animal, or group chooses to do; an occupation, a pursuit" ("Activity," 2012). A slight restriction was placed on this operationally: the "something" should have an informational component (with information construed to include data and knowledge). Operationally, this code was used to identify occurrences of normative, chosen information behavior and information-based occupations or pursuits—defined broadly—by members of a world. Such behavior had to be normative at some level to be coded, and general occurrences of information behavior were not coded, since under such an interpretation whole threads and interviews could be construed as such.

Organizations : Strauss (1978) stated social worlds may have "temporary divisions of labor" at first, but "organizations inevitably evolve to further one aspect or another of the world’s activities." This sense is similar to the definition of an organization as "an organized body of people with a particular purpose" found in the Oxford English Dictionary ("Organization," 2012). A combination of the two was used for operational coding: organizations are organized, but possibly temporary bodies with the particular purpose of furthering one aspect or another of the world’s activities.

3.7.2.3. Boundary object

Codes were applied for treatment of the digital library as a boundary object. This was operationalized by coding passages where the digital libraries cross the boundaries between multiple existing social or information worlds and are used within and adapted to many of them "simultaneously" (Star & Griesemer, 1989, p. 408) while "maintain[ing] a common identity across sites" (Star, 1989, p. 46). Instances of the boundary object’s use as a common site and information and communication technology (ICT) were coded using the definitions below. Coding took place at the level of these characteristics, not for boundary objects in general.

Common site : Strauss (1978) related sites to "space and shaped landscape"; the term’s use under the social worlds perspective corresponds to this sense given in the Oxford English Dictionary: "a position or location in or on something, esp. one where some activity happens or is done" ("Site," 2012). This location may be a physical, virtual, or metaphorical space, as seen in many of the concepts of community reviewed in Section 2.2. A succinct operational definition, used for coding, is that sites are spaces, positions, or locations—physical, virtual, or metaphorical—where information-related activities and behaviors take place.

Common information and communication technologies (ICTs) : Strauss (1978) defined technology as "inherited or innovative modes of carrying out the social world’s activities" (p. 122). ICTs are often referred to in the literature of LIS, knowledge management, education, and other fields without explicit definition, and there is no one historical source all uses stem from. Remaining compatible with most of this literature and adapting from the definitions of Strauss (1978) and the Oxford English Dictionary ("Technology," 2012), ICTs were operationalized for coding purposes as inherited or innovative processes, methods, techniques, equipment, or systems—developed from the practical application of knowledge—used for carrying out information or communication-related behaviors and activities.

3.7.3. Emergent Worlds

3.7.3.1. Convergence

Convergence was seen in a similar light to coherence, defined above as the degree of consistency between different translations and social or information worlds. Convergence was operationalized through the emergence of common characteristics in new social and information worlds (or proto-worlds), coded under the definitions given in section 3.7.2.2 above for social norms, social types, information value, information behaviors / activities, and organizations. Coding took place at the level of these characteristics, not for convergence in general, and was kept separate from the coding of these characteristics under coherence. If it was unclear whether a new world—of any size or scale—had truly emerged, memos and annotations were made to express the degree of emergence seen in the data.

3.7.3.2. Boundary object as standard

Treatment of LibraryThing and Goodreads as a new, local standard for a new, emergent social or information world was coded in this category, to distinguish it from treatment of the digital libraries as boundary objects within and across existing information worlds (section 3.7.2.3). This was operationalized under three subcodes, where all coding took place:

Emergent site : Under the definition of sites given above, cases of LibraryThing or Goodreads serving as an emergent, standard, and influential space, position, or location for information-related activities and behaviors were coded here. Clear evidence of the digital library serving as a new standard site for an emergent world was necessary. This code could be applied alongside the "emergent technology" code below, and in many cases this happened.

Emergent technology / ICT : Under the definition of technologies given above, cases of LibraryThing or Goodreads providing emergent and standard processes, methods, techniques, equipment, or systems—developed from the practical application of knowledge—used for carrying out information or communication-related behaviors and activities in an emergent world were coded here. Clear evidence of the digital library providing or serving as a new standard technology within an emergent world was necessary. This code could be applied alongside the "emergent site" code above.

Emergent boundary object : Cases where LibraryThing or Goodreads served as an emergent, standard boundary object, but not as a site or technology, were coded here. Clear evidence was necessary both that the digital library was serving such a role and that it was not serving as a site or technology. This code was expected to be rare, and in reality it was applied only a few times in the content analysis and not at all in the analysis of the interviews. It was included to ensure all cases of LibraryThing or Goodreads serving as a new, standardized boundary object were captured. This code was considered mutually exclusive with the "emergent site" and "emergent technology / ICT" codes above.

3.8. Data Management

I have kept all data from this study in digital format on my personal laptop computer. Survey data were kept in Microsoft Excel (.xls/.xlsx) format, interview audio in .mp3 format, and messages and interview transcripts in Microsoft Word (.doc/.docx) format. A password-protected and encrypted disk image was created and used for all dissertation data; the password is known to the researcher and no one else. Within this image, separate folders were created for each phase of the study. All data analyzed using the coding scheme discussed in section 3.7 above—including messages, interview transcripts, and notes—were also kept in an NVivo project (.nvp) file at the top level of the image. This disk image will be kept until the date arrives for destruction of records from this dissertation.

Filenames for data served and continue to serve as metadata, reflecting the source of the data (participant pseudonym or group name for individual data, phase name for collated results), the date it was collected, the digital library the data refers to (LibraryThing or Goodreads), and the type of data it represents (e.g., thread, survey response, interview transcript, interview notes, preliminary analysis). For example, bob_GR_transcript_022914.doc could be the filename for the transcript—in Microsoft Word format—of an interview with "Bob," a Goodreads user, conducted on the fictional date February 29, 2014. Three additional spreadsheets (in Microsoft Excel format) were created to provide metadata. Two—one for LibraryThing and one for Goodreads—link participants’ names and e-mail addresses to their pseudonyms; the third kept track of survey data for interviewees and was used during interview recruitment to help determine who would be invited to participate.
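A naming convention like this can be read back into its metadata fields mechanically. As a sketch only (no such script was part of the study; the pattern and helper below are assumptions inferred from the example filename), the fields could be recovered as follows:

```python
import re

# Hypothetical pattern for the convention described above:
# <source>_<LT|GR>_<datatype>_<MMDDYY>.<extension>
FILENAME_PATTERN = re.compile(
    r"(?P<source>\w+?)_(?P<library>LT|GR)_"
    r"(?P<datatype>\w+?)_(?P<date>\d{6})\.(?P<ext>\w+)$"
)

def parse_filename(name):
    """Return the metadata fields encoded in a data filename, or None."""
    m = FILENAME_PATTERN.match(name)
    if m is None:
        return None
    meta = m.groupdict()
    # Expand the digital-library abbreviation to its full name.
    meta["library"] = {"LT": "LibraryThing", "GR": "Goodreads"}[meta["library"]]
    return meta

print(parse_filename("bob_GR_transcript_022914.doc"))
```

The date is kept as a raw MMDDYY string rather than parsed as a calendar date, since the example above deliberately uses a fictional date that no date library would accept.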

Encrypted and password-protected backups of all research data have been made on a weekly basis (with rare exceptions due to travel) onto an external hard drive kept at the researcher’s home. Additional encrypted and password-protected backups have been and will be made onto recordable CDs or DVDs, to be kept in a filing cabinet belonging to the researcher in the Shores Building on FSU’s main campus or, once the researcher leaves FSU, in a similar secure work location. All research data for this study, including backups, will be deleted and destroyed by April 30, 2019 (a date fewer than five years from the completion of the study). Appropriate excerpts from the data (using pseudonyms) and synthesized data analysis, findings, and conclusions—including the completed dissertation, journal articles, and conference papers—may be shared with other researchers, scholars, and the general public up to and beyond the date given above. Future research data and findings building on the data collected and conclusions drawn during this study may be shared with other researchers, scholars, and the general public, subject to restrictions put in place by the researcher’s home institution and funding source(s) at the time of such research.

3.9. Validity, Reliability, and Trustworthiness

3.9.1. Holistic: Mixed Methods, Case Studies

The validity and reliability of mixed methods studies can be assessed in two ways (Creswell & Plano Clark, 2011). One can look at the research as a whole, considering the study’s design, interrelations, and how everything fits together to ensure high levels of validity and reliability. To support this view, Creswell and Plano Clark provided a list of potential validity threats in mixed methods research and strategies for minimizing these threats (pp. 242–243), which have been followed throughout the design and execution of this research.

Yin (2003) provided similar guidance for case study designs, summarized in his Figure 2.3 (p. 34). Each of these has been implemented in this study as follows:

"Use multiple sources of evidence": Three different methods of data collection have been used, each sampling across different groups and users from LibraryThing and Goodreads.

"Establish chain of evidence": The methods were linked together and informed each other. Data from content analysis helped inform the survey instrument, while the content analysis and survey data helped inform the interview instrument, process, and analysis. Data from all three methods has been tied together in the overall findings and conclusions from the study (see Chapter 5).

"Have key informants review draft case study report": While this specific technique was not used, I confirmed with interviewees that my impression of the critical incident they shared was accurate prior to the conclusion of each interview. Participants who requested a report of the findings on completion will receive one within a few weeks after defense of this dissertation.

"Do pattern-matching": Here Yin refers to looking for "several pieces of information from the same case [that] may be related to some theoretical proposition" (p. 26). This study achieved this by maintaining a consistent focus on the same phenomena throughout all three phases and using the same themes—based on the theoretical framework developed in section 2.8—for coding the messages (in the content analysis phase) and interview transcripts (in the interview phase).

"Do explanation-building": Here Yin refers to establishing a cause-and-effect relationship between patterns in data and theoretical propositions. The pattern-matching above, combined with the theoretical framework discussed in section 2.8 and the philosophical and epistemological viewpoint provided by social informatics and social constructionism, allowed such explanations to be developed through synthesis of data from all three phases (see Chapter 5, sections 5.1 and 5.2).

"Address rival explanations": While I admit favoring the theories used in the theoretical framework developed in section 2.8, other theories related to communities, collaboration, information behavior, and knowledge management—reviewed elsewhere in Chapter 2—could have provided a better explanation. The existing literature in these areas and my knowledge of it are used in later sections of Chapter 5 to address possibilities beyond the theoretical framework that relate to the findings seen here.

"Use logic models": Due to limitations of this study (see Chapter 5, section 5.7), a visual model may be premature at this point. I may develop figures, diagrams, and other visual aids to help present the findings as part of posters, conference papers, journal articles, and research presentations.

"Use theory in single-case studies; use replication logic in multiple-case studies": While this is a multiple-case design, only two cases are considered here. Theory—the theoretical framework in section 2.8—and replication logic—multiple groups and two digital libraries—have played important roles in the design and execution of this dissertation study.

"Use case study protocol": Constraints placed on procedures by the two sites were unavoidable, but where possible the same procedures were used for LibraryThing and Goodreads. Messages were collected and analyzed the same way; surveys were distributed, collected, and analyzed the same way; and interviews followed the same themes and procedures. The extra requirement, put in place by Goodreads, to obtain the consent of group moderators prior to collecting messages and survey responses from users of that digital library did not cause great differences in the data collected or its comparability with that from LibraryThing groups. The researcher took care to document the study as it proceeded, including deviations in procedures that became necessary; the most notable of these was the need to vary the intended statistics and accept greater limitations on the survey results than were at first intended, as discussed above and in Chapter 4, section 4.2.

"Develop case study database": Given the few cases in this study, a formal database was not constructed. The data management procedures discussed in section 3.8 and the NVivo qualitative analysis software—which runs on a Microsoft SQL Server database—provided benefits similar to Yin’s recommendation here.

While holistic consideration of validity and reliability is useful, a second approach is necessary: examining the validity and reliability of each phase of a mixed-methods study—quantitative and qualitative—as an individual method. Each type of research has "specific types of validity checks" to perform (Creswell & Plano Clark, 2011, p. 239), since—despite the continuum mentioned by Ridenour and Newman (2008)—different methods require different measures of their reliability and validity. The two sections below take this approach and apply it to the quantitative—survey—and qualitative—content analysis and interview—phases of the dissertation study conducted here.

3.9.2. Quantitative: Survey

Validity and reliability for quantitative research are given substantial treatment in research methods textbooks, such as Schutt (2009, pp. 130–141) and Babbie (2007, pp. 143–149). The validity of the survey data can be broken down by the different types of validity these and other authors identify as used for quantitative research:

Face validity (Babbie, 2007, p. 146; Schutt, 2009, p. 132): Given that the survey questions were developed from the theories discussed in Chapter 2 and the theoretical framework developed in section 2.8, each of which has face validity, the questions are judged to have met face validity for measuring the phenomena in question.

Measurement validity (Schutt, 2009, pp. 130–132): The survey questions were looked over by the researcher and his supervisory committee to ensure they did not suffer from idiosyncratic errors due to lack of understanding or unique feelings; from generic errors caused by outside factors; and from method factors such as unbalanced response choices or unclear questions. Attention paid to other kinds of validity helps improve measurement validity.

Content validity (Babbie, 2007, p. 147; Schutt, 2009, p. 132): Using multiple scales and multiple questions per scale helped the questions cover "the full range of [each] concept’s meaning" (p. 132) and the full range of the roles of LibraryThing and Goodreads in the social and information worlds of their users. The content analysis and interviews provided data from fewer users, but much thicker description of the phenomena of interest, as one would expect from qualitative research methods.

Criterion validity (Babbie, 2007, pp. 146–147; Schutt, 2009, pp. 132–134): This is difficult to measure here because no survey-based measures are known to have been developed for the theory of information worlds or boundary object theory prior to this study, and the social worlds perspective makes rare use of surveys. Schutt stated that "for many concepts of interest to social scientists, no other variable can reasonably be considered a criterion" (p. 134); Babbie (2007, p. 147) advocated using construct validity in these cases instead. Fowler (2002, p. 89) made a similar argument for questions "about subjective states, feelings, attitudes, and opinions," believing "there is no objective way of validating the answers … [they] can be assessed only by their correlations with other answers," through construct validity.

Construct validity (Babbie, 2007, p. 147; Schutt, 2009, pp. 134–135): Most of the measures used in the survey significantly correlated with each other, as one would expect given their relations to each other in the social worlds perspective and the theory of information worlds.

Reliability (Babbie, 2007, pp. 143–146; Schutt, 2009, pp. 135–138): While the survey was not repeated by each participant, using multiple measures of each concept and triangulation of the findings via the content analysis and interview phases of the study served a similar role to measures of test-retest or pre- and post-test reliability in an experimental design. The reliability of the scales was analyzed, while the randomization of survey questions (except the demographic questions) helped improve reliability.

3.9.3. Qualitative: Content Analysis and Interviews

A few qualitative and mixed methods researchers hold to positivistic treatments of validity and reliability, requiring use of quantitative measures such as intercoder percentage agreement, Holsti’s (1969) coefficient of reliability, Cohen’s (1960) kappa, or Krippendorff’s (2004b) alpha. Most qualitative researchers, however, argue that validity and reliability should be neither ported over from quantitative research unchanged nor ignored; instead they must be adapted to fit the naturalistic and ethnographic nature of most qualitative research (Gaskell & Bauer, 2000; Golafshani, 2003; Kvale & Brinkmann, 2009; Lincoln & Guba, 1985b; Ridenour & Newman, 2008). Which adaptations and changes should be put into place for qualitative research is the subject of debate (Golafshani, 2003). Golafshani found "credibility, … confirmability, … dependability … transferability," and "trustworthiness"—the last term preferred by Lincoln and Guba (1985b)—to be the terms most often used to describe the validity of qualitative research. No matter what term is chosen, validity is "inescapably grounded in the processes and intentions of particular [qualitative] research methodologies and projects" (Winter, 2000, p. 1, as cited in Golafshani, 2003, p. 602). Golafshani (p. 601) and Lincoln and Guba (1985b) linked dependability and trustworthiness most closely to reliability in qualitative research.

This dissertation research study, while drawing from all of the sources cited above, adapted the criteria and techniques cited by Gaskell and Bauer (2000) and Lincoln and Guba (1985b) for ensuring the validity and reliability of the qualitative phases of the study. These are discussed below, following four broader categories of trustworthiness outlined by Lincoln and Guba.

3.9.3.1. Credibility

The sequential, multiphase design allowed for prolonged engagement with the environment—19 months from prospectus defense to dissertation defense—and persistent, detailed observation of the phenomena under consideration. Using an approach for coding and analysis similar to the constant comparative method of grounded theory (Charmaz, 2006; Strauss & Corbin, 1994) helped ensure breadth and depth. Methods were triangulated via the sequential, multiphase design, where each method reflexively informed and was informed by the others and the theoretical framework developed in section 2.8. The theoretical framework provides two perspectives—the lenses of the social worlds perspective and the theory of information worlds—that were triangulated in analysis, and the researcher was and is familiar with other social theories, models, and concepts of information and information behavior, some of which apply to the findings (see the later sections of Chapter 5). Triangulation of multiple investigators was difficult given the individual nature of a dissertation project, but the input of the dissertation committee and the researcher’s colleagues was considered and welcomed at appropriate stages. Using member checking in the interview process and later methods in the sequential design to check earlier ones led to greater credibility for the study and produced a high level of communicative validity.

Statistical intercoder reliability testing, while used during the pilot testing of the content analysis procedures, was later and is now considered less appropriate for this study; the combination of theories incorporated in the theoretical framework was being used for the first time, and as such the coding scheme and framework should be considered at least somewhat emergent. The coding scheme and procedures are acknowledged to have been quite complex. Statistics such as Cohen’s (1960) kappa or Krippendorff’s (2004b) alpha are not very compatible with this exploratory study, which used an emergent framework and followed an interpretive approach to analysis (Ahuvia, 2001). The pilot testing of the content analysis procedures, incorporating intercoder reliability testing with Cohen’s kappa, showed that reaching high statistical levels of intercoder reliability would require extensive training of other coders—difficult if not impossible in dissertation research—and much fine-tuning of rules and procedures; such fine-tuning might be appropriate for a non-dissertation, post-positivistic study, but does not mesh with the interpretive and social constructionist paradigms in use here, nor does it fit the nature and resources of dissertation research. Intracoder reliability testing was performed, using percent agreement and Cohen’s kappa, for the content analysis and interviews; this is reported in Chapter 4 at the beginning of each section of findings. The emphasis placed on the other measures discussed here to address credibility and qualitative trustworthiness is believed to have been enough to overcome any limitations caused by not using intercoder reliability statistics.
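For readers unfamiliar with the statistics named above, percent agreement and Cohen's (1960) kappa can be sketched as follows. The codings shown are invented placeholders for illustration, not data from this study.

```python
# Sketch of the two intracoder reliability statistics named above:
# simple percent agreement and Cohen's (1960) kappa, which corrects
# observed agreement for the agreement expected by chance.
from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Proportion of units assigned the same code in both passes."""
    return sum(a == b for a, b in zip(codes_a, codes_b)) / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Kappa = (P_o - P_e) / (1 - P_e), where P_o is observed agreement
    and P_e is chance agreement from each pass's marginal frequencies."""
    n = len(codes_a)
    p_o = percent_agreement(codes_a, codes_b)
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Invented placeholder codings of six message units, two coding passes:
first_pass = ["translation", "coherence", "coherence", "site", "translation", "site"]
second_pass = ["translation", "coherence", "site", "site", "translation", "site"]
# percent_agreement(...) -> 5/6; cohens_kappa(...) -> 0.75
```

Because kappa discounts chance agreement, it is lower than raw percent agreement whenever the distribution of codes makes accidental matches likely.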

3.9.3.2. Transferability

Every effort was made in the prospectus to be transparent in how the research would be conducted, and such transparency carried over into the research and the writing of this dissertation. The data collection for the content analysis and interview phases was designed to provide valid and complete results by reaching saturation, leading to insightful analysis; this occurred. As seen in Chapters 4 and 5, the data allow for thick description (Geertz, 1973) of the phenomena in context, taken from messages and interview transcripts, which can allow other researchers to assess the potential transferability of the research findings to other settings.

3.9.3.3. Dependability

As discussed above, every effort has been made to be transparent in the conduct of this research. The data collection for the content analysis and interview phases provided valid and complete results, having reached saturation, leading to insightful analysis. I remained transparent with users who were surveyed and interviewed, disclosing the full and true purpose of the study and not engaging in deception. Selecting interviewees whose survey or content analysis data indicated they would provide interest and insight helped satisfy Gaskell and Bauer’s call for revealing and relevant findings, and I feel the findings in Chapters 4 and 5 also meet this call. Ensuring saturation was reached in the interviews increased the dependability of the study further. While the inquiry audit suggested by Lincoln and Guba was not implemented for this study, the process of defending the prospectus and dissertation and the guidance of the dissertation committee throughout have served a similar purpose.

3.9.3.4. Confirmability

The data analysis process included memoing, annotating, and note taking at appropriate moments, including reflective comments on the data and the researcher’s experience. The researcher noted any and all reflective comments on the research study, theoretical framework, data collection process, and data analysis process during all phases of the project. Triangulation (as discussed above) helped ensure confirmability. While the formal confirmability audit suggested by Lincoln and Guba—examining if findings, interpretations, and recommendations are supported by the data—was not implemented for this study, the process of defending the dissertation serves a similar purpose.

3.10. Ethical Considerations

This study is not known to have violated any ethical principles or procedures. The content analysis phase used messages accessible to the public, posted in LibraryThing and Goodreads groups, as its source of data. The identities of the users who posted each message remain confidential. Usernames have been used to allow for identifying common message authors in a thread, for analysis of the flow of conversation, and for identifying potential participants for later phases of the study, but have not been and will not be part of further analysis, results, and publications. Identities have remained confidential throughout the survey and interview phases of the study, and will continue to do so after a defended dissertation. Pseudonyms have been and will continue to be used in any published or unpublished reports of the results and conclusions, and any other data or information with the potential to identify participants to people familiar with them has been altered for the purposes of this dissertation and future presentation and publication.

Informed consent was obtained from participants in the survey and interview phases, before they completed the survey instrument or participated in the main portion of the interview, and—as required by Goodreads for use of their digital library as a setting for this research (see Appendix A, section A.1)—from the moderators of Goodreads groups. Their participation was voluntary; any participant who wished not to complete the survey or be interviewed, or wanted to request an interview be stopped or their survey data be deleted, would have been accommodated and allowed to not take part in or withdraw from the study. Moderators had the same right when it came to deciding if their group would take part in the study as a whole. No users or moderators who had previously consented expressed feeling uncomfortable and wishing to withdraw. Some moderators and potential interviewees did not respond to invitations, and one potential interviewee did not show up for her interview time and never responded to inquiries, but it is unclear why she chose to withdraw or why others were not interested in—in some cases further—participation. If any participants wish to withdraw their data from the study in the future, after already completing the survey or having been interviewed, their survey results, interview transcript, interview audio recording, and notes taken by the researcher after their interview will be removed from the data collected and analyzed as best as is possible, although their data will have already been analyzed and affected the conclusions drawn from data analysis (seen in Chapter 5). This is an unavoidable consequence and will be dealt with as best as possible by the researcher, should it occur.

On the opposite end of the research lifecycle, in two of the LibraryThing groups—which will not be named, to maintain confidentiality and avoid "rocking the boat" unnecessarily—a small number of users (five to ten) responded to the survey invitation post with comments expressing dislike of the survey instrument or confusion over the questions asked. I answered their questions and queries as best as possible without causing excessive bias in the survey results, but there was not much that could be done to please some users. They were, strictly speaking, not expressing any uncomfortable feelings—if anything they made me more uncomfortable than my survey had made them—but this is worth noting as a negative reaction. It was not the norm; most participants were happy to complete the survey without incident, and no participant experienced harm or risks greater than those of everyday life as a result of viewing or completing the survey or participating in the research in other ways.

The study was explained to participants in all letters they received, at the beginning of the survey in the informed consent statement, in the interview informed consent statement, and in verbal form at the beginning of the interview; see Appendix A for the letter and consent forms. As such, participants should have had complete awareness of the potential risks (or lack thereof) and benefits, of the voluntary nature of their participation, and of the compensation provided, before giving their informed consent for each phase of the data collection. Participants were not deceived in any way at any point during this study. The potential benefits to the participants, as users of the LibraryThing or Goodreads digital libraries, were great enough to outweigh any small possibility of harm or any risks discussed above. The identity and affiliation of the researcher were known to all prospective participants via the invitation letters and informed consent statements, and the purpose of the interview and the reasoning behind it were reiterated to each interview participant at the start of their interview. There were no issues seen with the researcher (as interviewer) maintaining appropriate boundaries with participants during the interview phase of the study.

The FSU Human Subjects Committee, an institutional review board (IRB), approved this study, including the pilot test of the content analysis phase. Documentation of this approval can be found in Appendix E, section E.3.

3.11. Conclusion

This chapter has presented the details of the method and procedures for this dissertation research study. The use of content analysis, a survey questionnaire, and semi-structured interviews in sequence within a mixed methods research design addressed the purpose of the research: to improve understanding of the organizational, cultural, institutional, collaborative, and social contexts of digital libraries. As stated in Chapter 1 and shown in Chapter 2, these contexts have important effects on users, communities, and information behavior. There is a clear need for theoretical and practical research into the roles digital libraries play within, between, and across communities, social worlds, and information worlds. This study helps satisfy that need.

The research design is well-grounded in epistemology and theory, previous research, and previous and existing practice; Chapter 2 provides this necessary context. The study operates under the tenets of the social paradigm, social informatics, and social constructionism, and incorporates boundary object theory, the social worlds perspective, and the theory of information worlds into its theoretical framework. This design has allowed for data to be collected and analyzed, at multiple levels and using multiple methods, on the roles that LibraryThing and Goodreads, two cases of social digital libraries, play as boundary objects in translation, coherence, and convergence between existing and emergent social and information worlds. Chapter 4 turns to presenting the findings from this data and its analysis, with Chapter 5 providing greater synthesis and discussion of the findings, implications, and conclusions of this research.

The FSU iSchool was known at the time as the School of Library and Information Studies; for simplicity the newer name (which took effect in early 2014) will be used to refer to this entity in this dissertation. The older name is still present on the invitation letters and consent forms as approved by FSU’s Human Subjects Committee in Appendix A.

How to Write a Literature Review | Guide, Examples, & Templates

Published on January 2, 2023 by Shona McCombes. Revised on September 11, 2023.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic .

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates, and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn’t just summarize sources—it analyzes, synthesizes, and critically evaluates to give a clear picture of the state of knowledge on the subject.



When you write a thesis, dissertation, or research paper, you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and its scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position your work in relation to other researchers and theorists
  • Show how your research addresses a gap or contributes to a debate
  • Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic.

Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.


Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” (Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” (Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” (Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” (Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines at the links below.


Before you begin searching for literature, you need a clearly defined topic.

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions.

Make a list of keywords

Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search.

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can also use Boolean operators to help narrow down your search.
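As a rough illustration of how Boolean operators combine the keyword groups above, synonyms within a concept are typically ORed together, while the concept groups themselves are ANDed. The helper below is ours, not part of any database's actual query syntax.

```python
# Hypothetical helper: build a Boolean search string from keyword
# groups -- synonyms ORed within a group, groups ANDed together.
def boolean_query(concept_groups):
    clauses = ['(' + ' OR '.join(f'"{kw}"' for kw in group) + ')'
               for group in concept_groups]
    return ' AND '.join(clauses)

query = boolean_query([
    ["social media", "Instagram", "TikTok"],
    ["body image", "self-esteem"],
])
# -> ("social media" OR "Instagram" OR "TikTok") AND ("body image" OR "self-esteem")
```

Most library catalogues and databases accept queries of roughly this shape, though the exact operator syntax varies by platform, so check each database's help pages.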

Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

You likely won’t be able to read absolutely everything that has been written on your topic, so it will be necessary to evaluate which sources are most relevant to your research question.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models, and methods?
  • Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and make sure you read any landmark studies and major theories in your field of research.

You can use our template to summarize and evaluate sources you’re thinking about using.

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It is important to keep track of your sources with citations to avoid plagiarism. It can be helpful to make an annotated bibliography, where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.


To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.

For example, in a review of literature on social media and body image, you might note that:

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat—this is a gap that you could address in your own research.

There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.

Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

Thematic

If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods, you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Like any other academic text, your literature review should have an introduction, a main body, and a conclusion. What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, you can follow these tips:

  • Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts

In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.

When you’ve finished writing and revising your literature review, don’t forget to proofread thoroughly before submitting.


A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarize yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

The literature review usually comes near the beginning of your thesis or dissertation. After the introduction, it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.


McCombes, S. (2023, September 11). How to Write a Literature Review | Guide, Examples, & Templates. Scribbr. Retrieved June 10, 2024, from https://www.scribbr.com/dissertation/literature-review/


American Psychological Association

Reference Examples

More than 100 reference examples and their corresponding in-text citations are presented in the seventh edition Publication Manual. Examples of the most common works that writers cite are provided on this page; additional examples are available in the Publication Manual.

To find the reference example you need, first select a category (e.g., periodicals), then choose the appropriate type of work (e.g., journal article) and follow the relevant example.

When selecting a category, use the webpages and websites category only when a work does not fit better within another category. For example, a report from a government website would use the reports category, whereas a page on a government website that is not a report or other work would use the webpages and websites category.

Also note that print and electronic references are largely the same. For example, to cite both print books and ebooks, use the books and reference works category, then choose the appropriate type of work (i.e., book) and follow the relevant example (e.g., whole authored book).

Examples on these pages illustrate the details of reference formats. We make every attempt to show examples that are in keeping with APA Style’s guiding principles of inclusivity and bias-free language. These examples are presented out of context only to demonstrate formatting issues (e.g., which elements to italicize, where punctuation is needed, placement of parentheses). References, including these examples, are not inherently endorsements for the ideas or content of the works themselves. An author may cite a work to support a statement or an idea, to critique that work, or for many other reasons. For more examples, see our sample papers.

Reference examples are covered in the seventh edition APA Style manuals: Chapter 10 of the Publication Manual and Chapter 10 of the Concise Guide.

Related handouts

  • Common Reference Examples Guide (PDF, 147KB)
  • Reference Quick Guide (PDF, 225KB)

Textual Works

Textual works are covered in Sections 10.1–10.8 of the Publication Manual. The most common categories and examples are presented here. For the reviews of other works category, see Section 10.7.

  • Journal Article References
  • Magazine Article References
  • Newspaper Article References
  • Blog Post and Blog Comment References
  • UpToDate Article References
  • Book/Ebook References
  • Diagnostic Manual References
  • Children’s Book or Other Illustrated Book References
  • Classroom Course Pack Material References
  • Religious Work References
  • Chapter in an Edited Book/Ebook References
  • Dictionary Entry References
  • Wikipedia Entry References
  • Report by a Government Agency References
  • Report with Individual Authors References
  • Brochure References
  • Ethics Code References
  • Fact Sheet References
  • ISO Standard References
  • Press Release References
  • White Paper References
  • Conference Presentation References
  • Conference Proceeding References
  • Published Dissertation or Thesis References
  • Unpublished Dissertation or Thesis References
  • ERIC Database References
  • Preprint Article References

Data and Assessments

Data sets are covered in Section 10.9 of the Publication Manual. For the software and tests categories, see Sections 10.10 and 10.11.

  • Data Set References
  • Toolbox References

Audiovisual Media

Audiovisual media are covered in Sections 10.12–10.14 of the Publication Manual. The most common examples are presented together here. In the manual, these examples and more are separated into categories for audiovisual, audio, and visual media.

  • Artwork References
  • Clip Art or Stock Image References
  • Film and Television References
  • Musical Score References
  • Online Course or MOOC References
  • Podcast References
  • PowerPoint Slide or Lecture Note References
  • Radio Broadcast References
  • TED Talk References
  • Transcript of an Audiovisual Work References
  • YouTube Video References

Online Media

Online media are covered in Sections 10.15 and 10.16 of the Publication Manual. Please note that blog posts are part of the periodicals category.

  • Facebook References
  • Instagram References
  • LinkedIn References
  • Online Forum (e.g., Reddit) References
  • TikTok References
  • X References
  • Webpage on a Website References
  • Clinical Practice Guideline References
  • Open Educational Resource References
  • Whole Website References

AI-Powered Poster Generator

With the Piktochart AI poster generator, you can turn any prompt into a gorgeous poster in seconds. No design skills? No problem. Just tweak it as you wish, then share your poster.

The new way of creating posters

Create in a Flash

Prompt to poster in 10 seconds

Say goodbye to complicated design steps. Simply type in your theme and watch as our AI poster maker transforms it into reality.

AI Image Generation

Bring your vision to life

Go beyond stock photos and generate images that are contextualized to your needs with our advanced AI image generator. Put your creativity to the test and generate highly realistic images that make you stand out.


Create Without Limits

Where every idea finds its canvas

For events, marketing, learning, or personal creations, Piktochart AI delivers captivating poster designs for every need. Dive into a universe of impressive imagery tailored to suit any subject.


Create Your Vision

Piktochart starts, you put the finishing touches

Our AI sets the stage with a professionally crafted poster, then passes control to you, allowing you to modify and refine each detail to amplify your visual impact while keeping true to your brand.

Posters created using Piktochart’s AI-powered poster maker


Professionals like you use Piktochart’s free online poster maker to:

Marketers

  • Create eye-catching promotional materials that align with brand identity, ideal for advertising campaigns, product launches, and trade shows.
  • Design captivating posters for corporate events, webinars, and conferences.
  • Communicate new offers, services, or store openings.


HR & Internal Comms

  • Internal announcements, motivational quotes, or event notifications.
  • Job advertisements and onboarding materials to attract and welcome new employees.
  • Convey important company policies and reminders through clear, engaging posters, ensuring better compliance and awareness.


NGOs and Government Organizations

  • Develop impactful posters for awareness drives, fundraising events, and community outreach programs.
  • Attract volunteers, highlighting the roles, benefits, and the difference they can make.
  • Announce charity events, workshops, and seminars.

Healthcare Organizations

  • Create informative posters on health topics, wellness tips, and medical advisories.
  • Showcase healthcare services, specialist departments, and new medical technologies available at healthcare facilities.
  • Display important health and safety protocols within healthcare settings.

How to Make a Digital Poster

1. Define Your Story

Briefly describe (within 120 characters) the purpose behind your poster, whether it’s for promotion, making an announcement, driving awareness, or sharing health information.

2. Select from Our Varied Poster Designs

Jumpstart your project with our array of ready-to-use poster templates, perfect for shining a spotlight on any subject. After picking your preferred design, you’ll find yourself in our editing suite.

3. Tweak the Design with Piktochart Editor

With your template chosen, hitting the “Edit” button grants you entry into the Piktochart editor. This is your playground to adjust, alter, and align the design to reflect your personal touch and message.

4. Enhance with Visual Elements

Piktochart’s user-friendly drag-and-drop editor makes personalization a breeze. Tap into our rich collection of complimentary photos, icons, illustrations, and text options to craft a poster that stands out. Enhancing and tailoring colors is just a click away with our versatile design tool.

5. Publish and Promote

Once your poster is exactly as you envisioned, it’s time to save and share your work. Export in various formats like JPG, PNG, or PDF, catering to both digital platforms and print materials.

AI-Powered Visualization for Any Topic

What kinds of posters can be generated using this AI tool?

Navigating design elements and finding the right visual style can be daunting. With Piktochart AI, it’s easy to transform data into high-quality posters. Excellence made simple, just for you.

Event posters

Drum up buzz and awareness for an upcoming event. Piktochart AI transforms dense data and information into engaging invitational posters for your events.

Advertising posters

Spark emotions that incite action – whether it is to make a purchase, improve brand opinion, donate to a cause, or make a lifestyle change. With Piktochart AI, it’s achievable at the click of a button.

Conference posters

Inform your audience at a glance about an upcoming conference. Whether it’s a medical conference, a marketing conference, or any other conference, Piktochart AI’s user-friendly poster maker helps you catch the attention of your audience effortlessly.

Ready to use AI to design posters like a pro?

Join more than 11 million people who already use Piktochart to create stunning posters.

Is it possible to personalize my poster with my own photos and diagrams?

What’s the limit on poster creation?

How do I enhance the quality of my posters?

Is signing up mandatory to use Piktochart?

Poster Resources


How to Make a Poster in 6 Easy Steps [2023 Guide With Templates]


25 Poster Ideas, Templates, and Tips for Creative Inspiration


7 Types of Posters and What Makes Them Stand Out

What else can you create with Piktochart AI?

Grad Coach®

What’s Included: Introduction Template

This template covers all the core components required in the introduction chapter/section of a typical dissertation or thesis, including:

  • The opening section
  • Background of the research topic
  • Statement of the problem
  • Rationale (including the research aims, objectives, and questions)
  • Scope of the study
  • Significance of the study
  • Structure of the document

The purpose of each section is clearly explained, followed by an overview of the key elements that you need to cover. We’ve also included practical examples to help you understand exactly what’s required, along with links to additional free resources (articles, videos, etc.) to help you along your research journey.

The cleanly formatted Google Doc can be downloaded as a fully editable MS Word Document (DOCX format), so you can use it as-is or convert it to LaTeX.
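If you do convert the template to LaTeX, the chapter outline above maps naturally onto standard sectioning commands. The sketch below is purely illustrative (the section names are taken from the template's outline; the document class and any formatting choices are assumptions you should adapt to your university's requirements):

```latex
% Illustrative skeleton only - not the official template.
\documentclass[12pt]{report}

\begin{document}

\chapter{Introduction}

\section{Opening Section}        % orient the reader to the topic
\section{Background of the Research Topic}
\section{Statement of the Problem}
\section{Rationale}              % research aims, objectives, and questions
\section{Scope of the Study}
\section{Significance of the Study}
\section{Structure of the Document}

\end{document}
```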

PS – if you’d like a high-level template for the entire thesis, we’ve got that too.

Thesis Introduction FAQs

What types of dissertations/theses can this template be used for?

The template follows the standard format for academic research projects, which means it will be suitable for the vast majority of dissertations and theses (especially those within the sciences), whether they are qualitative or quantitative in terms of design.

Keep in mind that the exact requirements for the introduction chapter/section will vary between universities and degree programs. These are typically minor, but it’s always a good idea to double-check your university’s requirements before you finalize your structure.

Is this template for an undergrad, Master or PhD-level thesis?

This template can be used for a dissertation, thesis or research project at any level of study. Doctoral-level projects typically require the introduction chapter to be more extensive/comprehensive, but the structure will typically remain the same.

Can I share this template with my friends/colleagues?

Yes, you’re welcome to share this template in its original format (no editing allowed). If you want to post about it on your blog or social media, we kindly request that you reference this page as your source.

What format is the template (DOC, PDF, PPT, etc.)?

The dissertation introduction chapter template is provided as a Google Doc. You can download it in MS Word format or make a copy to your Google Drive. You’re also welcome to convert it to whatever format works best for you, such as LaTeX or PDF.

What is the core purpose of this chapter?

The introduction chapter of a dissertation or thesis serves to introduce the research topic, clearly state the research problem, and outline the main research questions. It justifies the significance of the study, delineates its scope, and provides a roadmap of the dissertation’s structure.

In a nutshell, the introduction chapter sets the academic tone and context, laying the foundation for the subsequent analysis and discussion.

How long should the introduction chapter be?

This depends on the level of study (undergrad, Master or Doctoral), as well as your university’s specific requirements, so it’s best to check with them. As a general ballpark, introduction chapters for Masters-level projects are usually 1,500 – 2,000 words in length, while Doctoral-level projects can reach multiples of this.

How specific should the research objectives be in the introduction chapter?

In this chapter, your research objectives should be specific enough to clearly define the scope and direction of your study, but broad enough to encompass its overall aims.

Make sure that each objective can be realistically accomplished within the scope of your study and that each objective is directly related to and supports your research question(s).

As a rule of thumb, you should leave in-depth explanations for later chapters; the introduction should just provide a concise overview.

Can I mention the research results in the introduction?

How do I link the introduction to the literature review?

To transition smoothly from the introduction chapter to the literature review chapter in a thesis, it’s a good idea to:

  • Conclude the introduction by summarising the main points, such as the research problem, objectives, and significance of your study.
  • Explicitly state that the following chapter (literature review) will explore existing research and theoretical frameworks related to your topic.
  • Emphasise how the literature review will address gaps or issues identified in the introduction, setting the stage for your research question or hypothesis.
  • Use a sentence that acts as a bridge between the two chapters. For example, “To further understand this issue, the next chapter will critically examine the existing literature on [your topic].”

This approach will help form a logical flow and prepare the reader for the depth and context provided in the literature review.

Do you have templates for the other chapters?

Yes, we do. We are constantly developing our collection of free resources to help students complete their dissertations and theses. You can view all of our template resources here.

Can Grad Coach help me with my dissertation/thesis?

Yes, you’re welcome to get in touch with us to discuss our private coaching services.

Free Webinar: Literature Review 101

IMAGES

  1. Sample chapter 3 thesis writing

    chapter 3 thesis template

  2. Chapter 3 template for students in their research

    chapter 3 thesis template

  3. How to Create a Master's Thesis Outline: Sample and Tips (2023)

    chapter 3 thesis template

  4. Chapter 3 Thesis Methodology Sample

    chapter 3 thesis template

  5. Chapter 3 Thesis

    chapter 3 thesis template

  6. thesis chapter template

    chapter 3 thesis template

VIDEO

  1. 19- How to write chapter 3 of master thesis _ كيف تكتب الفصل الثالث في رسالة الماجستير؟

  2. Qualitative Chapter 3

  3. Writing That PhD Thesis

  4. Master's Thesis Structure (5 Main Chapters)

  5. Thesis Writing: Outlining Part III

  6. WRITING THE CHAPTER 3|| Research Methodology (Research Design and Method)

COMMENTS

  1. Dissertation & Thesis Outline

    To help you get started, we've created a full thesis or dissertation template in Word or Google Docs format. It's easy adapt it to your own requirements. ... The methods used in the study are then described in Chapter 3, after which the results are presented and discussed in Chapter 4. Sample verbs for variation in your chapter outline.

  2. PDF CHAPTER III: METHOD

    Dissertation Chapter 3 Sample. be be 1. Describe. quantitative, CHAPTER III: METHOD introduce the qualitative, the method of the chapter and mixed-methods). used (i.e. The purpose of this chapter is to introduce the research methodology for this. methodology the specific connects to it question(s). research.

  3. PDF SUGGESTED DISSERTATION OUTLINE

    CHAPTER 1: INTRODUCTION This chapter introduces and provides an overview of the research that is to be undertaken. Parts of Chapter 1 summarize your Chapters 2 and 3, and because of that, Chapter 1 normally should be written after Chapters 2 and 3. Dissertation committee chairs often want students to provide a 5-10 page overview of their proposed

  4. Free Dissertation & Thesis Template (Word Doc & PDF)

    The template structure reflects the overall research process, ensuring your dissertation or thesis will have a smooth, logical flow from chapter to chapter. The dissertation template covers the following core sections: The title page/cover page; Abstract (sometimes also called the executive summary) Table of contents; List of figures/list of tables

  5. Free Thesis Methodology Template (+ Examples)

    This template covers all the core components required in the research methodology chapter or section of a typical dissertation or thesis, including: The purpose of each section is explained in plain language, followed by an overview of the key elements that you need to cover. The template also includes practical examples to help you understand ...

  6. Free Dissertation & Thesis Templates

    The full dissertation/thesis template provides a high-level outline structure, whereas the individual chapter templates provide more detail. If you're just starting the writing process, the former could help you structure your outline document and get a feel for how it all fits together, whereas the latter (chapter-specific templates) can be used as you approach each chapter.

  7. University Thesis and Dissertation Templates

    University Thesis and Dissertation Templates. Theses and dissertations are already intensive, long-term projects that require a lot of effort and time from their authors. Formatting for submission to the university is often the last thing that graduate students do, and may delay earning the relevant degree if done incorrectly.

  8. PDF Presenting Methodology and Research Approach

    The dissertation's third chapter—the metho-dology chapter—covers a lot of ground. In this chapter, you document each step that you have taken in designing and conducting the study. The format that we present for this chapter covers all the necessary components of a comprehensive methodology chapter. Universities generally have their own fixed

  9. Templates

    UCI Libraries maintains the following templates to assist in formatting your graduate manuscript. If you are formatting your manuscript in Microsoft Word, feel free to download and use the template. ... Editable template of the Master's thesis formatting. PDF Thesis Template 2024. Word: Dissertation Template 2024. Editable template of the PhD ...

  10. PDF 3 Methodology

    The Methodology chapter is perhaps the part of a qualitative thesis that is most unlike its equivalent in a quantitative study. Students doing quantitative research have an established ... Chapter 3. Research methodology and method 3.0 Introduction 3.1 Methodology 3.1.1 Method of sampling 3.1.2 Organisation of data 3.1.3 Contextualisation

  11. PDF Writing Chapter 3 Chapter 3: Methodology

    Instruments. This section should include the instruments you plan on using to measure the variables in the research questions. (a) the source or developers of the instrument. (b) validity and reliability information. •. (c) information on how it was normed. •. (d) other salient information (e.g., number of. items in each scale, subscales ...

  12. How to Write Your Dissertation Chapter 3?

    In chapter 3 thesis, which is written in the same way as methodology part of a dissertation, you discuss how you performed the study in great detail. It usually includes the same elements and has a similar structure. You can use the outline example of this section for a dissertation but you should take into account that its structure should ...

  13. Chapter 3

    Introduction. The current chapter presents developing the research methods needed to complete the experimentation portion of the current study. The chapter will discuss in detail the various stages of developing the methodology of the current study. This includes a detailed discussion of the philosophical background of the research method chosen.

  14. Chapter 3: Quantitative Master's Thesis

    A theoretical framework as applicable to the field of study may be included here. Chapter Three. Methods. The methods section is the section that should clearly present each aspect of the process by which the study will be completed. Every attempt should be made to leave no question as to the procedures used to complete the study.

  15. How To Write The Methodology Chapter

    Do yourself a favour and start with the end in mind. Section 1 - Introduction. As with all chapters in your dissertation or thesis, the methodology chapter should have a brief introduction. In this section, you should remind your readers what the focus of your study is, especially the research aims. As we've discussed many times on the blog ...

  16. PDF Chapter 3: Method (Phenomenological Study)

    • Purpose statement must be written exactly the same as it was in Chapter 1. • Introduction should align with subsequent sections of this chapter. Write Your Dissertation In your dissertation template, write your introduction section, addressing each of the following points: • Restate the purpose statement. • Preview what is in Chapter 3.

  17. PDF APA Style Dissertation Guidelines: Formatting Your Dissertation

    Dissertation Content When the content of the dissertation starts, the page numbering should restart at page one using Arabic numbering (i.e., 1, 2, 3, etc.) and continue throughout the dissertation until the end. The Arabic page number should be aligned to the upper right margin of the page with a running head aligned to the upper left margin.

  18. Adam Worrall

    Conclusion. This chapter presents the methods and research design for this dissertation study. It begins by presenting the research questions and settings, the LibraryThing and Goodreads digital libraries. This is followed by an overview of the mixed methods research design used, incorporating a sequence of three phases.

  19. How to Write a Literature Review

    Examples of literature reviews. Step 1 - Search for relevant literature. Step 2 - Evaluate and select sources. Step 3 - Identify themes, debates, and gaps. Step 4 - Outline your literature review's structure. Step 5 - Write your literature review.

  20. Dissertation Structure & Layout 101 (+ Examples)

    Oh, and by the way, you can also grab our free dissertation/thesis template here to help speed things up. Title page. ... The core chapters (the "meat" of the dissertation): Chapter 1: Introduction; Chapter 2: Literature review; Chapter 3: Methodology; Chapter 4: Results; Chapter 5: Discussion; Chapter 6: Conclusion.

  21. (PDF) Chapter 3 Research Design and Methodology

    Chapter 3. Research Design and Methodology. Chapter 3 consists of three parts: (1) Purpose of the study and research design, (2) Methods, and (3) Statistical data analysis procedure. Part one ...

  22. Chapter 3

    Points   Scale   Verbal Interpretation
    4        3 - 4   Strongly Agree
    3        2 - 3   Agree
    2        1 - 2   Disagree
    1        1 - 1   Strongly Disagree

    Formula used in treating the data gathered:

    x̅ = (W₁x₁ + W₂x₂ + W₃x₃ + W₄x₄) / N

    Where: x̅ = weighted mean; x = total number of respondents per question; N = total number of respondents; W = respective legend point (4 ...
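    The weighted-mean treatment above can be illustrated with a short computation. This is a minimal sketch assuming the usual reading of the formula (each response count multiplied by its legend point, divided by the total number of respondents); the function and variable names are illustrative, not taken from the snippet's source.

    ```python
    # Weighted mean of Likert-scale responses, per the formula above.
    # counts[i] is the number of respondents who chose the option whose
    # legend point is weights[i]; N is the total number of respondents.

    def weighted_mean(counts, weights):
        n = sum(counts)  # total number of respondents (N)
        return sum(w * x for w, x in zip(weights, counts)) / n

    # Example: 10 respondents chose Strongly Agree (4), 5 Agree (3),
    # 3 Disagree (2), and 2 Strongly Disagree (1), so N = 20.
    print(weighted_mean([10, 5, 3, 2], [4, 3, 2, 1]))  # 3.15
    ```

    A result of 3.15 would fall in the 3 - 4 band of the scale above, i.e., "Strongly Agree" under that verbal interpretation.
    
    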

  23. Thesis Discussion Chapter Template (Word Doc + PDF)

    This template covers all the core components required in the discussion/analysis chapter of a typical dissertation or thesis. The purpose of each section is explained in plain language, followed by an overview of the key elements that you need to cover. The template also includes practical examples to help you understand exactly what ...

  24. Reference examples

    More than 100 reference examples and their corresponding in-text citations are presented in the seventh edition Publication Manual. Examples of the most common works that writers cite are provided on this page; additional examples are available in the Publication Manual. To find the reference example you need, first select a category (e.g., periodicals) and then choose the appropriate type of ...

  26. Free Download: Thesis Introduction Template (Word Doc + PDF)

    This template covers all the core components required in the introduction chapter/section of a typical dissertation or thesis, including: the opening section; background of the research topic; statement of the problem; rationale (including the research aims, objectives, and questions); scope of the study; and significance of the study.