
Global Dimensions of Intellectual Property Rights in Science and Technology (1993)

Chapter 12: A Case Study on Computer Programs

PAMELA SAMUELSON

HISTORICAL OVERVIEW

Phase 1: The 1950s and Early 1960s

When computer programs were first being developed, proprietary rights issues were not of much concern. Software was often developed in academic or other research settings. Much progress in the programming field occurred as a result of informal exchanges of software among academics and other researchers. In the course of such exchanges, a program developed by one person might be extended or improved by a number of colleagues who would send back (or on to others) their revised versions of the software. Computer manufacturers in this period often provided software to customers of their machines to make their major product (i.e., computers) more commercially attractive (which caused the software to be characterized as "bundled" with the hardware).

To the extent that computer programs were distributed in this period by firms for whom proprietary rights in software were important, programs tended to be developed and distributed through restrictive trade secret licensing agreements. In general, these were individually negotiated with customers. The licensing tradition of the early days of the software industry has framed some of the industry expectations about proprietary rights issues, with implications for issues still being litigated today.

In the mid-1960s, as programs began to become more diverse and complex, as more firms began to invest in the development of programs, and as some began to envision a wider market for software products, a public dialogue began to develop about what kinds of proprietary rights were or should be available for computer programs. The industry had trade secrecy and licensing protection, but some thought more legal protection might be needed.

Phase 2: Mid-1960s and 1970s

Copyright law was one existing intellectual property system into which some in the mid-1960s thought computer programs might potentially fit. Copyright had a number of potential advantages for software: it could provide a relatively long term of protection against unauthorized copying based on a minimal showing of creativity and a simple, inexpensive registration process. 1 Copyright would protect the work's "expression," but not the "ideas" it contained. Others would be free to use the same ideas in other software, or to develop independently the same or a similar work. All that would be forbidden was the copying of expression from the first author's work.

In 1964, the U.S. Copyright Office considered whether to begin accepting registration of computer programs as copyrightable writings. It decided to do so, but only under its "rule of doubt" and then only on condition that a full text of the program be deposited with the office, which would be available for public review. 2

The Copyright Office's doubt about the copyrightability of programs arose from a 1908 Supreme Court decision that had held that a piano roll was not an infringing "copy" of copyrighted music, but rather part of a mechanical device. 3 Mechanical devices (and processes) have traditionally been excluded from the copyright domain. 4 Although the office was aware that in machine-readable form, computer programs had a mechanical character, they also had a textual character, which was why the Copyright Office decided to accept them for registration.

The requirement that the full text of the source code of a program be deposited in order for a copyright in the program to be registered was consistent with a long-standing practice of the Copyright Office, 5 as well as with what has long been perceived to be the constitutional purpose of copyright, namely, promoting the creation and dissemination of knowledge. 6

Relatively few programs, however, were registered with the Copyright Office under this policy during the 1960s and 1970s. 7 Several factors may have contributed to this. Some firms may have been deterred by the requirement that the full text of the source code be deposited with the office and made available for public inspection, because this would have dispelled its trade secret status. Some may have thought a registration certificate issued under the rule of doubt might not be worth much. However, the main reason for the low number of copyright registrations was probably that a mass market in software still lay in the future. Copyright is useful mainly to protect mass-marketed products, and trade secrecy is quite adequate for programs with a small number of distributed copies.

Shortly after the Copyright Office issued its policy on the registrability of computer programs, the U.S. Patent Office issued a policy statement concerning its views on the patentability of computer programs. It rejected the idea that computer programs, or the intellectual processes that might be embodied in them, were patentable subject matter. 8 Only if a program was claimed as part of a traditionally patentable industrial process (i.e., those involving the transformation of matter from one physical state to another) did the Patent Office intend to issue patents for program-related innovations. 9

Patents are typically available for inventive advances in machine designs or other technological products or processes on completion of a rigorous examination procedure conducted by a government agency, based on a detailed specification of what the claimed invention is, how it differs from the prior art, and how the invention can be made. Although patent rights are considerably shorter in duration than copyrights, patent rights are considered stronger because no one may make, use, or sell the claimed invention without the patent owner's permission during the life of the patent. (Patents give rights not just against someone who copies the protected innovation, but even against those who develop it independently.) Also, much of what copyright law would consider to be unprotectable functional content ("ideas") if described in a book can be protected by patent law.

The Patent Office's policy denying the patentability of program innovations was consistent with the recommendations of a presidential commission convened to make suggestions about how the office could more effectively cope with an "age of exploding technology." The commission also recommended that patent protection not be available for computer program innovations. 10

Although there were some appellate decisions in the late 1960s and early 1970s overturning Patent Office rejections of computer program-related applications, few software developers looked to the patent system for protection after two U.S. Supreme Court decisions in the 1970s ruled that patent protection was not available for algorithms. 11 These decisions were generally regarded as calling into question the patentability of all software innovations, although some continued to pursue patents for their software innovations notwithstanding these decisions. 12

As the 1970s drew to a close, despite the seeming availability of copyright protection for computer programs, the software industry was still relying principally on trade secrecy and licensing agreements. Patents seemed largely, if not totally, unavailable for program innovations. Occasional suggestions were made that a new form of legal protection for computer programs should be devised, but the practice of the day was trade secrecy and licensing, and the discourse about additional protection was focused overwhelmingly on copyright.

During the 1960s and 1970s the computer science research community grew substantially in size. Although more software was being distributed under restrictive licensing agreements, much software, as well as innovative ideas about how to develop software, continued to be exchanged among researchers in this field. The results of much of this research were published and discussed openly at research conferences. Toward the end of this period, a number of important research ideas began to make their way into commercial projects, but this was not seen as an impediment to research by computer scientists because the commercial ventures tended to arise after the research had been published. Researchers during this period did not, for the most part, seek proprietary rights in their software or software ideas, although other rewards (such as tenure or recognition in the field) were available to those whose innovative research was published.

Phase 3: The 1980s

Four significant developments in the 1980s changed the landscape of the software industry and the intellectual property rights concerns of those who developed software. Two were developments in the computing field; two were legal developments.

The first significant computing development was the introduction to the market of the personal computer (PC), a machine made possible by improvements in the design of semiconductor chips, both as memory storage devices and as processing units. A second was the visible commercial success of some early PC applications software—most notably, VisiCalc, and then Lotus 1-2-3—which significantly contributed to the demand for PCs and made other software developers aware that fortunes could be made by selling software. With these developments, the base for a large mass market in software was finally in place.

During this period, computer manufacturers began to realize that it was to their advantage to encourage others to develop application programs that could be executed on their brand of computers. One form of encouragement involved making available to software developers whatever interface information would be necessary for development of application programs that could interact with the operating system software provided with the vendor's computers (information that might otherwise have been maintained as a trade secret). Another form of encouragement was pioneered by Apple Computer, which recognized the potential value to consumers (and ultimately to Apple) of having a relatively consistent "look and feel" to the applications programs developed to run on Apple computers. Apple developed detailed guidelines for applications developers to aid in the construction of this consistent look and feel.

The first important legal development—one which was in place when the first successful mass-marketed software applications were introduced into the market—was passage of amendments to the copyright statute in 1980 to resolve the lingering doubt about whether copyright protection was available for computer programs. 13 These amendments were adopted on the recommendation of the National Commission on New Technological Uses of Copyrighted Works (CONTU), which Congress had established to study a number of "new technology" issues affecting copyrighted works. The CONTU report emphasized the written nature of program texts, which made them seem so much like written texts that had long been protected by copyright law. The CONTU report noted the successful expansion of the boundaries of copyright over the years to take in other new technology products, such as photographs, motion pictures, and sound recordings. It predicted that computer programs could also be accommodated in the copyright regime. 14

Copyright law was perceived by CONTU as the best alternative for protection of computer programs under existing intellectual property regimes. Trade secrecy, CONTU noted, was inherently unsuited for mass-marketed products because the first sale of the product on the open market would dispel the secret. CONTU observed that Supreme Court rulings had cast doubts on the availability of patent protection for software. CONTU's confidence in copyright protection for computer programs was also partly based on an economic study it had commissioned. This economic study regarded copyright as suitable for protecting software against unauthorized copying after sale of the first copy of it in the marketplace, while fostering the development of independently created programs. The CONTU majority expressed confidence that judges would be able to draw lines between protected expression and unprotected ideas embodied in computer programs, just as they did routinely with other kinds of copyrighted works.

A strong dissenting view was expressed by the novelist John Hersey, one of the members of the CONTU commission, who regarded programs as too mechanical to be protected by copyright law. Hersey warned that the software industry had no intention to cease the use of trade secrecy for software. Dual assertion of trade secrecy and copyright seemed to him incompatible with copyright's historical function of promoting the dissemination of knowledge.

Another development during this period was that the Copyright Office dropped its earlier requirement that the full text of source code be deposited with it. Now only the first and last 25 pages of source code had to be deposited to register a program. The office also decided it had no objection if the copyright owner blacked out some portions of the deposited source code so as not to reveal trade secrets. This new policy was said to be consistent with the new copyright statute that protected both published and unpublished works alike, in contrast to the prior statutes that had protected mainly published works. 15

With the enactment of the software copyright amendments, software developers had a legal remedy in the event that someone began to mass-market exact or near-exact copies of the developers' programs in competition with the owner of the copyright in the program. Unsurprisingly, the first software copyright cases involved exact copying of the whole or substantial portions of program code, and in them, the courts found copyright infringement. Copyright litigation in the mid- and late 1980s began to grapple with questions about what, besides program code, copyright protects about computer programs. Because the "second-generation" litigation affects the current legal framework for the protection of computer programs, the issues raised by these cases will be dealt with in the next section.

As CONTU Commissioner Hersey anticipated, software developers did not give up their claims to the valuable trade secrets embodied in their programs after enactment of the 1980 amendments to the copyright statute.

To protect those secrets, developers began distributing their products in machine-readable form, often relying on "shrink-wrap" licensing agreements to limit consumer rights in the software. 16 Serious questions exist about the enforceability of shrink-wrap licenses, some because of their dubious contractual character 17 and some because of provisions that aim to deprive consumers of rights conferred by the copyright statute. 18 That has not led, however, to their disuse.

One common trade secret-related provision of shrink-wrap licenses, as well as of many negotiated licenses, is a prohibition against decompilation or disassembly of the program code. Such provisions are relied on as the basis of software developer assertions that notwithstanding the mass distribution of a program, the program should be treated as an unpublished copyrighted work as to which virtually no fair use defenses can be raised. 19

Those who seek to prevent decompilation of programs tend to assert that since decompilation involves making an unauthorized copy of the program, it constitutes an improper means of obtaining trade secrets in the program. Under this theory, decompilation of program code results in three unlawful acts: copyright infringement (because of the unauthorized copy made during the decompilation process), trade secret misappropriation (because the secret has been obtained by improper means, i.e., by copyright infringement), and a breach of the licensing agreement (which prohibits decompilation).

Under this theory, copyright law would become the legal instrument by which trade secrecy could be maintained in a mass-marketed product, rather than a law that promotes the dissemination of knowledge. Others regard decompilation as a fair use of a mass-marketed program and, shrink-wrap restrictions to the contrary, as unenforceable. This issue has been litigated in the United States, but has not yet been resolved definitively. 20 The issue remains controversial both within the United States and abroad.

A second important legal development in the early 1980s—although one that took some time to become apparent—was a substantial shift in the U.S. Patent and Trademark Office (PTO) policy concerning the patentability of computer program-related inventions. This change occurred after the 1981 decision by the U.S. Supreme Court in Diamond v. Diehr, which ruled that a rubber curing process, one element of which was a computer program, was a patentable process. On its face, the Diehr decision seemed consistent with the 1966 Patent Office policy and seemed, therefore, not likely to lead to a significant change in patent policy regarding software innovations. 21 By the mid-1980s, however, the PTO had come to construe the Court's ruling broadly and started issuing a wide variety of computer program-related patents. Only "mathematical algorithms in the abstract" were now thought unpatentable. Word of the PTO's new receptivity to software patent applications spread within the patent bar and gradually to software developers.

During the early and mid-1980s, both the computer science field and the software industry grew very significantly. Innovative ideas in computer science and related research fields were widely published and disseminated. Software was still exchanged by researchers, but a new sensitivity to intellectual property rights began to arise, with general recognition that unauthorized copying of software might infringe copyrights, especially if done with a commercial purpose. This was not perceived as presenting a serious obstacle to research, for it was generally understood that a reimplementation of the program (writing one's own code) would be noninfringing. 22 Also, much of the exchange of software (and of ideas about software) among researchers during the early and mid-1980s occurred outside the commercial marketplace. Increasingly, the exchanges took place with the aid of government-subsidized networks of computers.

Software firms often benefited from the plentiful availability of research about software, as well as from the availability of highly trained researchers who could be recruited as employees. Software developers began investing more heavily in research and development work. Some of the results of this research were published or exchanged at technical conferences, but much was kept as a trade secret and incorporated in new products.

By the late 1980s, concerns began arising in the computer science and related fields, as well as in the software industry and the legal community, about the degree of intellectual property protection needed to promote a continuation of the high level of innovation in the software industry. 23 Although most software development firms, researchers, and manufacturers of computers designed to be compatible with the leading firms' machines seemed to think that copyright (complemented by trade secrecy) was adequate to their needs, the changing self-perception of several major computer manufacturers led them to push for more and "stronger" protection. (This concern has been shared by some successful software firms whose most popular programs were being "cloned" by competitors.) Having come to realize that software was where the principal money of the future would be made, these computer firms began reconceiving themselves as software developers. As they did so, their perspective on software protection issues changed as well. If they were going to invest in software development, they wanted "strong" protection for it. They have, as a consequence, become among the most vocal advocates of strong copyright, as well as of patent protection for computer programs. 24

CURRENT LEGAL APPROACHES IN THE UNITED STATES

Software developers in the United States are currently protecting software products through one or more of the following legal protection mechanisms: copyright, trade secret, and/or patent law. Licensing agreements often supplement these forms of protection. Some software licensing agreements are negotiated with individual customers; others are printed forms found under the plastic shrink-wrap of a mass-marketed package. 25 Few developers rely on only one form of legal protection. Developers seem to differ somewhat on the mix of legal protection mechanisms they employ as well as on the degree of protection they expect from each legal device.

Although the availability of intellectual property protection has unquestionably contributed to the growth and prosperity of the U.S. software industry, some in the industry and in the research community are concerned that innovation and competition in this industry will be impeded rather than enhanced if existing intellectual property rights are construed very broadly. 26 Others, however, worry that courts may not construe intellectual property rights broadly enough to protect what is most valuable about software, and if too little protection is available, there may be insufficient incentives to invest in software development; hence innovation and competition may be retarded through underprotection. 27 Still others (mainly lawyers) are confident that the software industry will continue to prosper and grow under the existing intellectual property regimes as the courts "fill out" the details of software protection on a case-by-case basis as they have been doing for the past several years. 28

What's Not Controversial

Although the main purpose of the discussion of current approaches is to give an overview of the principal intellectual property issues about which there is controversy in the technical and legal communities, it may be wise to begin with a recognition of a number of intellectual property issues as to which there is today no significant controversy. Describing only the aspects of the legal environment as to which controversies exist would risk creating a misimpression about the satisfaction many software developers and lawyers have with some aspects of intellectual property rights they now use to protect their and their clients' products.

One uncontroversial aspect of the current legal environment is the use of copyright to protect against exact or near-exact copying of program code. Another is the use of copyright to protect certain aspects of user interfaces, such as videogame graphics, that are easily identifiable as "expressive" in a traditional copyright sense. Also relatively uncontroversial is the use of copyright protection for low-level structural details of programs, such as the instruction-by-instruction sequence of the code. 29

The use of trade secret protection for the source code of programs and other internally held documents concerning program design and the like is similarly uncontroversial. So too is the use of licensing agreements negotiated with individual customers under which trade secret software is made available to licensees when the number of licensees is relatively small and when there is a reasonable prospect of ensuring that licensees will take adequate measures to protect the secrecy of the software. Patent protection for industrial processes that have computer program elements, such as the rubber curing process in the Diehr case, is also uncontroversial.

Substantial controversies exist, however, about the application of copyright law to protect other aspects of software, about patent protection for other kinds of software innovations, about the enforceability of shrink-wrap licensing agreements, and about the manner in which the various forms of legal protection seemingly available to software developers interrelate in the protection of program elements (e.g., the extent to which copyright and trade secret protection can coexist in mass-marketed software).

Controversies Arising From Whelan v. Jaslow

Because quite a number of the most contentious copyright issues arise from the Whelan v. Jaslow decision, this subsection focuses on that case. In the summer of 1986, the Third Circuit Court of Appeals affirmed a trial court decision in favor of Whelan Associates in its software copyright lawsuit against Jaslow Dental Laboratories. 30 Jaslow's program for managing dental lab business functions used some of the same data and file structures as Whelan's program (to which Jaslow had access), and five subroutines of Jaslow's program functioned very similarly to Whelan's. The trial court inferred that there were substantial similarities in the underlying structure of the two programs based largely on a comparison of similarities in the user interfaces of the two programs, even though user interface similarities were not the basis for the infringement claim. Jaslow's principal defense was that Whelan's copyright protected only against exact copying of program code, and since there were no literal similarities between the programs, no copyright infringement had occurred.

In its opinion on this appeal, the Third Circuit stated that copyright protection was available for the "structure, sequence, and organization" (SSO) of a program, not just the program code. (The court did not distinguish between high- and low-level structural features of a program.) The court analogized copyright protection for program SSO to the copyright protection available for such things as detailed plot sequences in novels. The court also emphasized that the coding of a program was a minor part of the cost of development of a program. The court expressed fear that if copyright protection was not accorded to SSO, there would be insufficient incentives to invest in the development of software.

The Third Circuit's Whelan decision also quoted with approval from that part of the trial court opinion stating that similarities in the manner in which programs functioned could serve as a basis for a finding of copyright infringement. Although recognizing that user interface similarities did not necessarily mean that two programs had similar underlying structures (thereby correcting an error the trial judge had made), the appellate court thought that user interface similarities might still be some evidence of underlying structural similarities. In conjunction with other evidence in the case, the Third Circuit decided that infringement had properly been found.

Although a number of controversies have arisen out of the Whelan opinion, the aspect of the opinion that has received the greatest attention is the test the court used for determining copyright infringement in computer program cases. The "Whelan test" regards the general purpose or function of a program as its unprotectable "idea." All else about the program is, under the Whelan test, protectable "expression" unless there is only one or a very small number of ways to achieve the function (in which case idea and expression are said to be "merged," and what would otherwise be expression is treated as an idea). The sole defense this test contemplates for one who has copied anything more detailed than the general function of another program is that copying that detail was "necessary" to perform that program function. If there is in the marketplace another program that does the function differently, courts applying the Whelan test have generally been persuaded that the copying was unjustified and that what was taken must have been "expressive."

Although the Whelan test has been used in a number of subsequent cases, including the well-publicized Lotus v. Paperback case, 31 some judges have rejected it as inconsistent with copyright law and tradition, or have found ways to distinguish the Whelan case when employing its test would have resulted in a finding of infringement. 32

Many commentators assert that the Whelan test interprets copyright protection too expansively. 33 Although the court in Whelan did not seem to realize it, the Whelan test would give much broader copyright protection to computer programs than has traditionally been given to novels and plays, which are among the artistic and fanciful works generally accorded a broader scope of protection than functional kinds of writings (of which programs would seem to be an example). 34 The Whelan test would forbid reuse of many things people in the field tend to regard as ideas. 35 Some commentators have suggested that because innovation in software tends to be of a more incremental character than in some other fields, and especially given the long duration of copyright protection, the Whelan interpretation of the scope of copyright is likely to substantially overprotect software. 36

One lawyer-economist, Professor Peter Menell, has observed that the model of innovation used by the economists who did the study of software for CONTU is now considered to be an outmoded approach. 37 Those economists focused on a model that considered what incentives would be needed for development of individual programs in isolation. Today, economists would consider what protection would be needed to foster innovation of a more cumulative and incremental kind, such as has largely typified the software field. In addition, the economists on whose work CONTU relied did not anticipate the networking potential of software and consequently did not study what provisions the law should make in response to this phenomenon. Menell has suggested that with the aid of their now more refined model of innovation, economists today might make somewhat different recommendations on software protection than they did in the late 1970s for CONTU. 38

As a matter of copyright law, the principal problem with the Whelan test is its incompatibility with the copyright statute, the case law properly interpreting it, and traditional principles of copyright law. The copyright statute provides that not only ideas, but also processes, procedures, systems, and methods of operation, are unprotectable elements of copyrighted works. 39 This provision codifies some long-standing principles derived from U.S. copyright case law, such as the Supreme Court's century-old Baker v. Selden decision that ruled that a second author did not infringe a first author's copyright when he put into his own book substantially similar ledger sheets to those in the first author's book. The reason the Court gave for its ruling was that Selden's copyright did not give him exclusive rights to the bookkeeping system, but only to his explanation or description of it. 40 The ordering and arrangement of columns and headings on the ledger sheets were part of the system; to get exclusive rights in this, the Court said that Selden would have to get a patent.

The statutory exclusion from copyright protection for methods, processes, and the like was added to the copyright statute in part to ensure that the scope of copyright in computer programs would not be construed too broadly. Yet, in cases in which the Whelan test has been employed, the courts have tended to find the presence of protectable "expression" when they perceive there to be more than a couple of ways to perform some function, seeming not to realize that there may be more than one "method" or "system" or "process" for doing something, none of which is properly protected by copyright law. The Whelan test does not attempt to exclude methods or processes from the scope of copyright protection, and its recognition of functionality as a limitation on the scope of copyright is triggered only when there are no alternative ways to perform program functions.

Whelan has been invoked by plaintiffs not only in cases involving similarities in the internal structural design features of programs, but also in many other kinds of cases. SSO can be construed to include internal interface specifications of a program, the layout of elements in a user interface, and the sequence of screen displays when program functions are executed, among other things. Even the manner in which a program functions can be said to be protectable by copyright law under Whelan. The case law on these issues and other software issues is in conflict, and resolution of these controversies cannot be expected very soon.

Traditionalist Versus Strong Protectionist View of What Copyright Law Does and Does Not Protect in Computer Programs

Traditional principles of copyright law, when applied to computer programs, would tend to yield only a "thin" scope of protection for them. Unquestionably, copyright protection would exist for the code of the program and the kinds of expressive displays generated when program instructions are executed, such as explanatory text and fanciful graphics, which are readily perceptible as traditional subject matters of copyright law. A traditionalist would regard copyright protection as not extending to functional elements of a program, whether at a high or low level of abstraction, or to the functional behavior that programs exhibit. Nor would copyright protection be available for the applied know-how embodied in programs, including program logic. 41 Copyright protection would also not be available for algorithms or other structural abstractions in software that are constituent elements of a process, method, or system embodied in a program.

Efficient ways of implementing a function would also not be protectable by copyright law under the traditionalist view, nor would aspects of software design that make the software easier to use (because this bears on program functionality). The traditionalist would also not regard making a limited number of copies of a program to study it and extract interface information or other ideas from the program as infringing conduct, because computer programs are a kind of work for which it is necessary to make a copy to "read" the text of the work. 42 Developing a program that incorporates interface information derived from decompilation would also, in the traditionalist view, be noninfringing conduct.

If decompilation and the use of interface information derived from the study of decompiled code were to be infringing acts, the traditionalist would regard copyright as having been turned inside out, for instead of promoting the dissemination of knowledge as has been its traditional purpose, copyright law would become the principal means by which trade secrets would be maintained in widely distributed copyrighted works. Instead of protecting only expressive elements of programs, copyright would become like a patent: a means by which to get exclusive rights to the configuration of a machine—without meeting stringent patent standards or following the strict procedures required to obtain patent protection. This too would seem to turn copyright inside out.

Because interfaces, algorithms, logic, and functionalities of programs are aspects of programs that make them valuable, it is understandable that some of those who seek to maximize their financial returns on software investments have argued that "strong" copyright protection is or should be available for all valuable features of programs, either as part of program sso or under the Whelan "there's-another-way-to-do-it" test. 43 Congress seems to have intended for copyright law to be interpreted as to programs on a case-by-case basis, and if courts determine that valuable features should be considered "expressive," the strong protectionists would applaud this common law evolution. If traditional concepts of copyright law and its purposes do not provide an adequate degree of protection for software innovation, they see it as natural that copyright should grow to provide it. Strong protectionists tend to regard traditionalists as sentimental Luddites who do not appreciate that what matters is for software to get the degree of protection it needs from the law so that the industry will thrive.

Although some cases, most notably the Whelan and Lotus decisions, have adopted the strong protectionist view, traditionalists will tend to regard these decisions as flawed and unlikely to be affirmed in the long run because they are inconsistent with the expressed legislative intent to have traditional principles of copyright law applied to software. Some copyright traditionalists favor patent protection for software innovations on the ground that the valuable functional elements of programs do need protection to create proper incentives for investing in software innovations, but that this protection should come from patent law, not from copyright law.

Controversy Over "Software Patents"

Although some perceive patents as a way to protect valuable aspects of programs that cannot be protected by copyright law, those who argue for patents for software innovations do not rely on the "gap-filling" concern alone. As a legal matter, proponents of software patents point out that the patent statute makes new, nonobvious, and useful "processes" patentable. Programs themselves are processes; they also embody processes. 44 Computer hardware is clearly patentable, and it is a commonplace in the computing field that any task for which a program can be written can also be implemented in hardware. This too would seem to support the patentability of software.

Proponents also argue that protecting program innovations by patent law is consistent with the constitutional purpose of patent law, which is to promote progress in the "useful arts." Computer program innovations are technological in nature, which is said to make them part of the useful arts to which the Constitution refers. Proponents insist that patent law has the same potential for promoting progress in the software field as it has had for promoting progress in other technological fields. They regard attacks on patents for software innovations as reflective of the passing of the frontier in the software industry, a painful transition period for some, but one necessary if the industry is to have sufficient incentives to invest in software development.

Some within the software industry and the technical community, however, oppose patents for software innovations. 45 Opponents tend to make two kinds of arguments against software patents, often without distinguishing between them. One set of arguments questions the ability of the PTO to deal well with software patent applications. Another set raises more fundamental questions about software patents. Even assuming that the PTO could begin to do a good job at issuing software patents, some question whether innovation in the software field will be properly promoted if patents become widely available for software innovations. The main points of both sets of arguments are developed below.

Much of the discussion in the technical community has focused on "bad" software patents that have been issued by the PTO. Some patents are considered bad because the innovation was, unbeknownst to the PTO, already in the state of the art prior to the date of invention claimed in the patent. Others are considered bad because critics assert that the innovations they embody are too obvious to be deserving of patent protection. Still others are said to be bad because they are tantamount to a claim for performing a particular function by computer or to a claim for a law of nature, neither of which is regarded as patentable subject matter. Complaints abound that the PTO, after decades of not keeping up with developments in this field, is so far out of touch with what has been and is happening in the field as to be unable to make appropriate judgments on novelty and nonobviousness issues. Other complaints relate to the office's inadequate classification scheme for software and lack of examiners with suitable education and experience in computer science and related fields to make appropriate judgments on software patent issues. 46

A somewhat different point is made by those who assert that the software industry has grown to its current size and prosperity without the aid of patents, which causes them to question the need for patents to promote innovation in this industry. 47 The highly exclusionary nature of patents (any use of the innovation without the patentee's permission is infringing) contrasts sharply with the tradition of independent reinvention in this field. The high expense associated with obtaining and enforcing patents raises concerns about the increased barriers to entry that may be created by the patenting of software innovations. Since much of the innovation in this industry has come from small firms, policies that inhibit entry by small firms may not promote innovation in this field in the long run. Similar questions arise as to whether patents will promote a proper degree of innovation in an incremental industry such as the software industry. It would be possible to undertake an economic study of conditions that have promoted and are promoting progress in the software industry to serve as a basis for a policy decision on software patents, but this has not been done to date.

Some computer scientists and mathematicians are also concerned about patents that have been issuing for algorithms, 48 which they regard as discoveries of fundamental truths that should not be owned by anyone. Because any use of a patented algorithm within the scope of the claims—whether by an academic or a commercial programmer, whether one knew of the patent or not—may be an infringement, some worry that research on algorithms will be slowed down by the issuance of algorithm patents. One mathematical society has recently issued a report opposing the patenting of algorithms. 49 Others, including Richard Stallman, have formed a League for Programming Freedom.

There is substantial case law to support the software patent opponent position, notwithstanding the PTO change in policy. 50 Three U.S. Supreme Court decisions have stated that computer program algorithms are unpatentable subject matter. Other case law affirms the unpatentability of processes that involve the manipulation of information rather than the transformation of matter from one physical state to another.

One other concern worth mentioning if both patents and copyrights are used to protect computer program innovations is whether a meaningful boundary line can be drawn between the patent and copyright domains as regards software. 51 A joint report of the U.S. PTO and the Copyright Office optimistically concludes that no significant problems will arise from the coexistence of these two forms of protection for software because copyright law will only protect program "expression" whereas patent law will only protect program "processes." 52

Notwithstanding this report, I continue to be concerned with the patent/copyright interface because of the expansive interpretations some cases, particularly Whelan, have given to the scope of copyright protection for programs. This prefigures a significant overlap of copyright and patent law as to software innovations. This overlap would undermine important economic and public policy goals of the patent system, which generally leaves in the public domain those innovations not novel or nonobvious enough to be patented. Mere "originality" in a copyright sense is not enough to make an innovation in the useful arts protectable under U.S. law. 53

A concrete example may help illustrate this concern. Some patent lawyers report getting patents on data structures for computer programs. The Whelan decision relied in part on similarities in data structures to prove copyright infringement. Are data structures "expressive" or "useful"? When one wants to protect a data structure of a program by copyright, does one merely call it part of the sso of the program, whereas if one wants to patent it, one calls it a method (i.e., a process) of organizing data for accomplishing certain results? What if anything does copyright's exclusion from protection of processes embodied in copyrighted works mean as applied to data structures? No clear answer to these questions emerges from the case law.
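The ambiguity can be made concrete with a minimal, hypothetical sketch (written here in Python purely for illustration; it is not drawn from any of the litigated programs). The same few lines of source code are simultaneously a "text" a court might characterize as expression and a method of organizing data, i.e., a process:

```python
# Hypothetical illustration: a ledger-like data structure, echoing the
# ledger sheets at issue in Baker v. Selden. Read as text, the choice and
# ordering of fields looks like authorship; read functionally, it is a
# method of organizing data to support a bookkeeping process.
from dataclasses import dataclass

@dataclass
class LedgerEntry:
    date: str       # when the transaction occurred
    account: str    # the column/heading under which it is recorded
    amount: float   # negative for debits, positive for credits

def balance(entries):
    """The bookkeeping 'system' the structure exists to serve."""
    return sum(e.amount for e in entries)

entries = [
    LedgerEntry("1993-01-01", "sales", 100.0),
    LedgerEntry("1993-01-02", "rent", -40.0),
]
print(balance(entries))  # → 60.0
```

A copyright plaintiff might describe the selection and arrangement of these fields as part of the program's sso; a patent applicant might claim the same arrangement as a process for organizing accounting data. Nothing in the structure itself dictates which characterization is correct.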

Nature of Computer Programs and Exploration of a Modified Copyright Approach

It may be that the deeper problem is that computer programs, by their very nature, challenge or contradict some fundamental assumptions of the existing intellectual property regimes. Underlying the existing regimes of copyright and patent law are some deeply embedded assumptions about the very different nature of two kinds of innovations that are thought to need very different kinds of protection owing to some important differences in the economic consequences of their protection. 54

In the United States, these assumptions derive largely from the U.S. Constitution, which specifically empowers Congress "to promote the progress of science [i.e., knowledge] and useful arts [i.e., technology], by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries." 55 This clause has historically been parsed as two separate clauses packaged together for convenience: one giving Congress power to enact laws aimed at promoting the progress of knowledge by giving authors exclusive rights in their writings, and the other giving Congress power to promote technological progress by giving inventors exclusive rights in their technological discoveries. Copyright law implements the first power, and patent law the second.

Owing partly to the distinctions between writings and machines, which the constitutional clause itself set up, copyright law has excluded machines and other technological subject matters from its domain. 56 Even when described in a copyrighted book, an innovation in the useful arts was considered beyond the scope of copyright protection. The Supreme Court's Baker v. Selden decision reflects this view of the constitutional allocation. Similarly, patent law has historically excluded printed matter (i.e., the contents of writings) from its domain, notwithstanding the fact that printed matter may be a product of a manufacturing process. 57 Also excluded from the patent domain have been methods of organizing, displaying, and manipulating information (i.e., processes that might be embodied in writings, for example, mathematical formulas), notwithstanding the fact that "processes" are named in the statute as patentable subject matter. They were not, however, perceived to be "in the useful arts" within the meaning of the constitutional clause.

The constitutional clause has been understood as both a grant of power and a limitation on power. Congress cannot, for example, grant perpetual patent rights to inventors, for that would violate the "limited times" provision of the Constitution. Courts have also sometimes ruled that Congress cannot, under this clause, grant exclusive rights to anyone but authors and inventors. In the late nineteenth century, the Supreme Court struck down the first federal trademark statute on the ground that Congress did not have power to grant rights under this clause to owners of trademarks who were neither "authors" nor "inventors." 58 A similar view was expressed in last year's Feist Publications v. Rural Telephone Services decision by the Supreme Court, which repeatedly stated that Congress could not constitutionally protect the white pages of telephone books through copyright law because to be an "author" within the meaning of the Constitution required some creativity in expression that white pages lacked. 59

Still other Supreme Court decisions have suggested that Congress could not constitutionally grant exclusive rights to innovators in the useful arts who were not true "inventors." 60 Certain economic assumptions are connected with this view, including the assumption that more modest innovations in the useful arts (the work of a mere mechanic) will be forthcoming without the grant of the exclusive rights of a patent, but that the incentives of patent rights are necessary to make people invest in making significant technological advances and share the results of their work with the public instead of keeping them secret.

One reason the United States does not have a copyright-like form of protection for industrial designs, as do many other countries, is because of lingering questions about the constitutionality of such legislation. In addition, concerns exist that the economic consequences of protecting uninventive technological advances will be harmful. So powerful are the prevailing patent and copyright paradigms that when Congress was in the process of considering the adoption of a copyright-like form of intellectual property protection for semiconductor chip designs, there was considerable debate about whether Congress had constitutional power to enact such a law. It finally decided it did have such power under the commerce clause, but even then was not certain.

As this discussion reveals, U.S. intellectual property law has long assumed that something is either a writing (in which case it is protectable, if at all, by copyright law) or a machine (in which case it is protectable, if at all, by patent law), but cannot be both at the same time. However, as Professor Randall Davis has so concisely said, software is "a machine whose medium of construction happens to be text." 61 Davis regards the act of creating computer programs as inevitably one of both authorship and invention. There may be little or nothing about a computer program that is not, at base, functional in nature, and nothing about it that does not have roots in the text. Because of this, it will inevitably be difficult to draw meaningful boundaries for patents and copyrights as applied to computer programs.

Another aspect of computer programs that challenges the assumptions of existing intellectual property systems is reflected in another of Professor Davis's observations, namely, that "programs are not only texts; they also behave." 62 Much of the dynamic behavior of computer programs is highly functional in nature. If one followed traditional copyright principles, this functional behavior—no matter how valuable it might be—would be considered outside the scope of copyright law. 63 Although the functionality of program behavior might seem at first glance to mean that patent protection would be the obvious form of legal protection for it, as a practical matter, drafting patent claims that would adequately capture program behavior as an invention is infeasible. There are at least two reasons for this: programs are able to exhibit such a large number and variety of states that claims could not reasonably cover them, and program behavior has a "gestalt"-like character that makes a more copyright-like approach desirable.

Some legal scholars have argued that because of their hybrid character as both writings and machines, computer programs need a somewhat different legal treatment than either traditional patent or copyright law would provide. 64 They have warned of distortions in the existing legal systems likely to occur if one attempts to integrate such a hybrid into the traditional systems as if it were no different from the traditional subject matters of these systems. 65 Even if the copyright and patent laws could be made to perform their tasks with greater predictability than is currently the case, these authors warn that such regimes may not provide the kind of protection that software innovators really need, for most computer programs will be legally obvious for patent purposes, and programs are, over time, likely to be assimilated within copyright in a manner similar to that given to "factual" and "functional" literary works that have only "thin" protection against piracy. 66

Professor Reichman has reported on the recurrent oscillations between states of under- and overprotection when legal systems have tried to cope with another kind of legal hybrid, namely, industrial designs (sometimes referred to as "industrial art"). Much the same pattern seems to be emerging in regard to computer programs, which are, in effect, "industrial literature." 67

The larger problem these hybrids present is that of protecting valuable forms of applied know-how embodied in incremental innovation that cannot successfully be maintained as trade secrets:

[M]uch of today's most advanced technology enjoys a less favorable competitive position than that of conventional machinery because the unpatentable, intangible know-how responsible for its commercial value becomes embodied in products that are distributed on the open market. A product of the new technologies, such as a computer program, an integrated circuit design, or even a biogenetically altered organism may thus bear its know-how on its face, a condition that renders it as vulnerable to rapid appropriation by second-comers as any published literary or artistic work.

From this perspective, a major problem with the kinds of innovative know-how underlying important new technologies is that they do not lend themselves to secrecy even when they represent the fruit of enormous investment in research and development. Because third parties can rapidly duplicate the embodied information and offer virtually the same products at lower prices than those of the originators, there is no secure interval of lead time in which to recuperate the originators' initial investment or their losses from unsuccessful essays, not to mention the goal of turning a profit. 68

From a behavioral standpoint, investors in applied scientific know-how find the copyright paradigm attractive because of its inherent disposition to supply artificial lead time to all comers without regard to innovative merit and without requiring originators to preselect the products that are most worthy of protection. 69

Full copyright protection, however, with its broad notion of equivalents geared to derivative expressions of an author's personality is likely to disrupt the workings of the competitive market for industrial products. For this and other reasons, Professor Reichman argues that a modified copyright approach to the protection of computer programs (and other legal hybrids) would be a preferable framework for protecting the applied know-how they embody than either the patent or the copyright regime would presently provide. Similar arguments can be made for a modified form of copyright protection for the dynamic behavior of programs. A modified copyright approach might involve a short duration of protection for original valuable functional components of programs. It could be framed to supplement full copyright protection for program code and traditionally expressive elements of text and graphics displayed when programs execute, features of software that do not present the same dangers of competitive disruption from full copyright protection.

The United States is, in large measure, already undergoing the development of a sui generis law for protection of computer software through case-by-case decisions in copyright lawsuits. Devising a modified copyright approach to protecting certain valuable components that are not suitably protected under the current copyright regime would have the advantage of allowing a conception of the software protection problem as a whole, rather than on a piecemeal basis as occurs in case-by-case litigation in which the skills of certain attorneys and certain facts may end up causing the law to develop in a skewed manner. 70

There are, however, a number of reasons said to weigh against sui generis legislation for software, among them the international consensus that has developed on the use of copyright law to protect software and the trend toward broader use of patents for software innovations. Some also question whether Congress would be able to devise a more appropriate sui generis system for protecting software than that currently provided by copyright. Some are also opposed to sui generis legislation for new technology products such as semiconductor chips and software on the ground that new intellectual property regimes will make intellectual property law more complicated, confusing, and uncertain.

Although there are many today who ardently oppose sui generis legislation for computer programs, these same people may well become among the most ardent proponents of such legislation if the U.S. Supreme Court, for example, construes the scope of copyright protection for programs to be quite thin, and reiterates its rulings in Benson, Flook, and Diehr that patent protection is unavailable for algorithms and other information processes embodied in software.

INTERNATIONAL PERSPECTIVES

After adopting copyright as a form of legal protection for computer programs, the United States campaigned vigorously around the world to persuade other nations to protect computer programs by copyright law as well. These efforts have been largely successful. Although copyright is now an international norm for the protection of computer software, the fine details of what copyright protection for software means, apart from protection against exact copying of program code, remain somewhat unclear in other nations, just as in the United States.

Other industrialized nations have also tended to follow the U.S. lead concerning the protection of computer program-related inventions by patent law. 71 Some countries that in the early 1960s were receptive to the patenting of software innovations became less receptive after the Gottschalk v. Benson decision by the U.S. Supreme Court. Some even adopted legislation excluding computer programs from patent protection. More recently, these countries are beginning to issue more program-related patents, once again paralleling U.S. experience, although as in the United States, the standards for patentability of program-related inventions are somewhat unclear. 72 If the United States and Japan continue to issue a large number of computer program-related patents, it seems quite likely other nations will follow suit.

There has been strong pressure in recent years to include relatively specific provisions about intellectual property issues (including those affecting computer programs) as part of the international trade issues within the framework of the General Agreement on Tariffs and Trade (GATT). 73 For a time, the United States was a strong supporter of this approach to resolution of disharmonies among nations on intellectual property issues affecting software. The impetus for this seems to have slackened, however, after U.S. negotiators became aware of a lesser degree of consensus among U.S. software developers on certain key issues than they had thought was the case. Since the adoption of its directive on software copyright law, the European Community (EC) has begun pressing for international adoption of its position on a number of important software issues, including its copyright rule on decompilation of program code.

There is a clear need, given the international nature of the market for software, for a substantial international consensus on software protection issues. However, because there are so many hotly contested issues concerning the extent of copyright and the availability of patent protection for computer programs yet to be resolved, it may be premature to include very specific rules on these subjects in the GATT framework.

Prior to the adoption of the 1991 European Directive on the Protection of Computer Programs, there was general acceptance in Europe of copyright as a form of legal protection for computer programs. A number of nations had interpreted existing copyright statutes as covering programs. Others took legislative action to extend copyright protection to software. There was, however, some divergence in approach among the member nations of the EC in the interpretation of copyright law to computer software. 74

France, for example, although protecting programs under its copyright law, put software in the same category as industrial art, a category of work that is generally protected in Europe for 25 years instead of the life plus 50-year term that is the norm for literary and other artistic works. German courts concluded that to satisfy the "originality" standard of German copyright law, the author of a program needed to demonstrate that the program was the result of more than an average programmer's skill, a seemingly patentlike standard. In addition, Switzerland (a non-EC member but European nonetheless) nearly adopted an approach that treated both semiconductor chip designs and computer programs under a new copyright-like law.

Because of these differences and because it was apparent that computer programs would become an increasingly important item of commerce in the European Community, the EC undertook in the late 1980s to develop a policy concerning intellectual property protection for computer programs to which member nations should harmonize their laws. There was some support within the EC for creating a new law for the protection of software, but the directorate favoring a copyright approach won this internal struggle over what form of protection was appropriate for software.

In December 1988 the EC issued a draft directive on copyright protection for computer programs. This directive was intended to spell out in considerable detail in what respects member states should have uniform rules on copyright protection for programs. (The European civil law tradition generally prefers specificity in statutory formulations, in contrast with the U.S. common law tradition, which often prefers case-by-case adjudication of disputes as a way to fill in the details of a legal protection scheme.)

The draft directive on computer programs was the subject of intense debate within the European Community, as well as the object of some intense lobbying by major U.S. firms who were concerned about a number of issues, but particularly about what rule would be adopted concerning decompilation of program code and protection of the internal interfaces of programs. Some U.S. firms, among them IBM Corp., strongly opposed any provision that would allow decompilation of program code and sought to have interfaces protected; other U.S. firms, such as Sun Microsystems, sought a rule that would permit decompilation and would deny protection to internal interfaces. 75

The final EC directive published in 1991 endorses the view that computer programs should be protected under member states' copyright laws as literary works and given at least 50 years of protection against unauthorized copying. 76 It permits decompilation of program code only if and to the extent necessary to obtain information to create an interoperable program. The inclusion in another program of information necessary to achieve interoperability seems, under the final directive, to be lawful.

The final EC directive states that "ideas" and "principles" embodied in programs are not protectable by copyright, but does not provide examples of what these terms might mean. The directive contains no exclusion from protection of such things as processes, procedures, methods of operation, and systems, as the U.S. statute provides. Nor does it clearly exclude protection of algorithms, interfaces, and program logic, as an earlier draft would have done. Rather, the final directive indicates that to the extent algorithms, logic, and interfaces are ideas, they are unprotectable by copyright law. In this regard, the directive seems, quite uncharacteristically for its civil law tradition, to leave much detail about how copyright law will be applied to programs to be resolved by litigation.

Having just finished the process of debating the EC directive about copyright protection of computer programs, intellectual property specialists in the EC have no interest in debating the merits of any sui generis approach to software protection, even though the only issue the EC directive really resolved may have been that of interoperability. Member states will likely have to address another controversial issue—whether or to what extent user interests in standardization of user interfaces should limit the scope of copyright protection for programs—as they act on yet another EC directive, one that aims to standardize user interfaces of computer programs. Some U.S. firms may perceive this latter directive as an effort to appropriate valuable U.S. product features.

Japan was the first major industrialized nation to consider adoption of a sui generis approach to the protection of computer programs. 77 Its Ministry of International Trade and Industry (MITI) published a proposal that would have given 15 years of protection against unauthorized copying to computer programs that could meet a copyright-like originality standard under a copyright-like registration regime. MITI attempted to justify its proposed different treatment for computer programs as one appropriate to the different character of programs, compared with traditional copyrighted works. 78 The new legal framework was said to respond and be tailored to the special character of programs. American firms, however, viewed the MITI proposal, particularly its compulsory license provisions, as an effort by the Japanese to appropriate the valuable products of the U.S. software industry. Partly as a result of U.S. pressure, the MITI proposal was rejected by the Japanese government, and the alternative copyright proposal made by the ministry with jurisdiction over copyright law was adopted.

Notwithstanding their inclusion in copyright law, computer programs are a special category of protected work under Japanese law. Limiting the scope of copyright protection for programs is a provision indicating that program languages, rules, and algorithms are not protected by copyright law. 79 Japanese case law under this copyright statute has proceeded along lines similar to U.S. case law with regard to exact and near-exact copying of program code and graphical aspects of videogame programs, 80 but there have been some Japanese court decisions interpreting the exclusion-from-protection provisions in a manner seemingly at odds with some U.S. decisions.

The Tokyo High Court, for example, has opined that the processing flow of a program (an aspect of a program said to be protectable by U.S. law in the Whelan case) is an algorithm within the meaning of the copyright limitation provision. 81 Another decision seems to bear out Professor Karjala's prediction that Japanese courts would interpret the programming language limitation to permit firms to make compatible software. 82 There is one Japanese decision that can be read to prohibit reverse engineering of program code, but because that case involved not only disassembly of program code but also distribution of a clearly infringing program, the legality of intermediate copying to discern such things as interface information remains unclear in Japan. 83

Other Nations

The United States has been pressing a number of nations to give "proper respect" to U.S. intellectual property products, including computer programs. In some cases, as in its dealings with the People's Republic of China, the United States has been pressing for new legislation to protect software under copyright law. In some cases, as in its dealings with Thailand, the United States has been pressing for more vigorous enforcement of intellectual property laws as they affect U.S. intellectual property products. In other cases, as in its dealings with Brazil, the United States pressed for repeal of sui generis legislation that disadvantaged U.S. software producers, compared with Brazilian developers. The United States has achieved some success in these efforts. Despite these successes, piracy of U.S.-produced software and other intellectual property products remains a substantial source of concern.

FUTURE CHALLENGES

Many of the challenges posed by the use of existing intellectual property laws to protect computer programs have been discussed in previous sections. These discussions may, however, only map the landscape of legal issues of widespread concern today. Below are some suggestions about issues on which computer programs may present legal difficulties in the future.

Advanced Software Systems

It has thus far been exceedingly difficult for the legal system to resolve even relatively simple disputes about software intellectual property rights, such as those involved in the Lotus v. Paperback Software case. This does not bode well for how the courts are likely to deal with more complex problems presented by more complex software in future cases. The difficulties arise partly from the lack of familiarity of judges with the technical nature of computers and software, and partly from the lack of close analogies within the body of copyright precedents from which resolutions of software issues might be drawn. The more complex the software, the greater is the likelihood that specially trained judges will be needed to resolve intellectual property disputes about the software. Some advanced software systems are also likely to be sufficiently different from traditional kinds of copyrighted works that the analogical distance between the precedents and a software innovation may make it difficult to predict how copyright law should be applied to it. What copyright protection should be available, for example, to a user interface that responds to verbal commands, gestures, or movements of eyeballs?

Digital Media

The digital medium itself may require adaptation of the models underlying existing intellectual property systems. 84 Copyright law is built largely on the assumption that authors and publishers can control the manufacture and distribution of copies of protected works emanating from a central source. The ease with which digital works can be copied, redistributed, and used by multiple users, as well as the compactness and relative invisibility of works in digital form, have already created substantial incentives for developers of digital media products to focus their commercialization efforts on controlling the uses of digital works, rather than on the distribution of copies, as has more commonly been the rule in copyright industries.

Rules designed for controlling the production and distribution of copies may be difficult to adapt to a system in which uses need to be controlled. Some digital library and hypertext publishing systems seem to be designed to bypass copyright law (and its public policy safeguards, such as the fair use rule) and to establish norms of use through restrictive access licensing agreements. 85 Whether the law will eventually be used to regulate conditions imposed on access to these systems, as it has regulated access to such communication media as broadcasting, remains to be seen. However, the increasing convergence of intellectual property policy, broadcast and telecommunications policy, and other aspects of information policy seems inevitable.

There are already millions of people connected to networks of computers, who are thereby enabled to communicate with one another with relative ease, speed, and reliability. Plans are afoot to add millions more and to allow a wide variety of information services to those connected to the networks, some of which are commercial and some of which are noncommercial in nature. Because networks of this type and scope are a new phenomenon, it would seem quite likely that some new intellectual property issues will arise as the use of computer networks expands. The more commercial the uses of the networks, the more likely intellectual property disputes are to occur.

More of the content distributed over computer networks is copyrighted than its distributors seem to realize, but even as to content that has been recognized as copyrighted, there is a widespread belief among those who communicate over the net that at least noncommercial distributions of content—no matter the number of recipients—are "fair uses" of the content. Some lawyers would agree with this; others would not. Those responsible for the maintenance of the network may need to be concerned about potential liability until this issue is resolved.

A different set of problems may arise when commercial uses are made of content distributed over the net. Here the most likely disputes are those concerning how broad a scope of derivative work rights copyright owners should have. Some owners of copyrights can be expected to resist allowing anyone but themselves (or those licensed by them) to derive any financial benefit from creating a product or service that is built upon the value of their underlying work. Yet value-added services may be highly desirable to consumers, and the ability of outsiders to offer these products and services may spur beneficial competition. At the moment, the case law generally regards a copyright owner's derivative work right as infringed only if a recognizable block of expression is incorporated into another work. 86 However, the ability of software developers to provide value-added products and services that derive value from the underlying work without copying expression from it may lead some copyright owners to seek to extend the scope of derivative work rights.

Patents and Information Infrastructure of the Future

If patents are issued for all manner of software innovations, they are likely to play an important role in the development of the information infrastructure of the future. Patents have already been issued for hypertext navigation systems, for such things as latent semantic indexing algorithms, and for other software innovations that might be used in the construction of a new information infrastructure. Although it is easy to develop a list of the possible pros and cons of patent protection in this domain, as in the more general debate about software patents, it is worth noting that patents have not played a significant role in the information infrastructure of the past or of the present. How patents would affect the development of the new information infrastructure has not been given the study this subject may deserve.

Conflicts Between Information Haves and Have-Nots on an International Scale

When the United States was a developing nation and a net importer of intellectual property products, it did not respect copyright interests of any authors but its own. Charles Dickens may have made some money from the U.S. tours at which he spoke at public meetings, but he never made a dime from the publication of his works in the United States. Now that the United States is a developed nation and a net exporter of intellectual property products, its perspective on the rights of developing nations to determine for themselves what intellectual property rights to accord to the products of firms of the United States and other developed nations has changed. Given the greater importance nowadays of intellectual property products, both to the United States and to the world economy, it is foreseeable that there will be many occasions on which developed and developing nations will have disagreements on intellectual property issues.

The United States will face a considerable challenge in persuading other nations to subscribe to the same detailed rules that it has for dealing with intellectual property issues affecting computer programs. It may be easier for the United States to deter outright "piracy" (unauthorized copying of the whole or substantially the whole of copyrighted works) of U.S. intellectual property products than to convince other nations that they must adopt the same rules as the United States has for protecting software.

It is also well for U.S. policymakers and U.S. firms to contemplate the possibility that U.S. firms may not always hold the leading position in the world market for software products that they enjoy today. In pushing for very "strong" intellectual property protection for software in the expectation that it will help preserve the U.S. advantage in the world market, U.S. policymakers should be careful not to adopt rules that may substantially disadvantage the United States in the world market of the future if, for reasons not foreseen today, it loses the lead it currently enjoys in the software market.



Guidelines for conducting and reporting case study research in software engineering

  • Open access
  • Published: 19 December 2008
  • Volume 14, pages 131–164 (2009)


  • Per Runeson &
  • Martin Höst


Case study is a suitable research methodology for software engineering research since it studies contemporary phenomena in their natural context. However, the understanding of what constitutes a case study varies, and hence so does the quality of the resulting studies. This paper aims to provide an introduction to case study methodology and guidelines both for researchers conducting case studies and for readers studying reports of such studies. The content is based on the authors’ own experience from conducting and reading case studies. The terminology and guidelines are compiled from methodology handbooks in other research domains, in particular social science and information systems, and adapted to the needs of software engineering. We present recommended practices for software engineering case studies as well as empirically derived and evaluated checklists for researchers and readers of case study research.


1 Introduction

The acceptance of empirical studies in software engineering, and of their contributions to increasing knowledge, is continuously growing. The analytical research paradigm alone is not sufficient for investigating complex real-life issues involving humans and their interactions with technology. However, the overall share of empirical studies in computer science research is negligibly small: Sjøberg et al. (2005) found 103 experiments in 5,453 articles, and Ramesh et al. (2004) identified less than 2% experiments with human subjects, and only 0.16% field studies, among 628 articles. Further, existing work on empirical research methodology in software engineering has a strong focus on experimental research—the earliest contributions by Moher and Schneider (1981) and Basili et al. (1986), the first methodology handbook by Wohlin et al. (2000), and promotion by Tichy (1998). All have a tendency towards quantitative approaches, although qualitative approaches have also been discussed in later years, e.g. by Seaman (1999). There exist guidelines for the conduct of experiments (Kitchenham et al. 2002; Wohlin et al. 2000) and their reporting (Jedlitschka and Pfahl 2005), for measurements (Basili and Weiss 1984; Fenton and Pfleeger 1996; van Solingen and Berghout 1999), and for systematic reviews (Kitchenham 2007), while only little is written on case studies in software engineering (Höst and Runeson 2007; Kitchenham et al. 1995; Wohlin et al. 2003) and qualitative methods (Dittrich 2007; Seaman 1999; Sim et al. 2001). Recently, a comprehensive view of empirical research issues for software engineering has been presented, edited by Shull et al. (2008).

The term “case study” appears every now and then in the titles of software engineering research papers. However, the presented studies range from very ambitious and well organized studies in the field to small toy examples that claim to be case studies. Additionally, different taxonomies are used to classify research. The term case study is used in parallel with terms like field study and observational study, each focusing on a particular aspect of the research methodology. For example, Lethbridge et al. (2005) use field studies as the most general term, while Easterbrook et al. (2008) call case studies one of five “classes of research methods”. Zelkowitz and Wallace (1998) propose a terminology that is somewhat different from what is used in other fields, and categorize project monitoring, case study and field study as observational methods. This plethora of terms causes confusion and problems when trying to aggregate multiple empirical studies.

The case study methodology is well suited for many kinds of software engineering research, as the objects of study are contemporary phenomena that are hard to study in isolation. Case studies do not generate the same results on, e.g., causal relationships as controlled experiments do, but they provide a deeper understanding of the phenomena under study. Because they differ from analytical and controlled empirical studies, case studies have been criticized for being of less value, impossible to generalize from, biased by researchers, etc. This critique can be met by applying proper research methodology practices and by recognizing that knowledge is more than statistical significance (Flyvbjerg 2007; Lee 1989). However, the research community has to learn more about the case study methodology in order to review and judge it properly.

Case study methodology handbooks are abundantly available in, e.g., the social sciences (Robson 2002; Stake 1995; Yin 2003), and this literature has also been used in software engineering. In the field of information systems (IS) research, the case study methodology is also much more mature than in software engineering. For example, Benbasat et al. (1987) provide a brief overview of case study research in information systems, Lee (1989) analyzes case studies from a positivistic perspective, and Klein and Myers (1999) do the same from an interpretive perspective.

It is relevant to raise the question: what is specific about software engineering that motivates a specialized research methodology? Beyond the specifics of the examples, the characteristics of software engineering objects of study differ from those of social science and, to some extent, from those of information systems. The study objects are 1) private corporations or units of public agencies developing software rather than public agencies or private corporations using software systems; 2) project oriented rather than line or function oriented; and 3) advanced engineering work conducted by highly educated people rather than routine work. Additionally, the software engineering research community has a pragmatic and result-oriented view on research methodology, rather than a philosophical stand, as noticed by Seaman (1999).

The purpose of this paper is to provide guidance for the researcher conducting case studies, for reviewers of case study manuscripts and for readers of case study papers. It is synthesized from general methodology handbooks, mainly from the social science field, as well as literature from the information systems field, and adapted to software engineering needs. Existing literature on software engineering case studies is of course included as well. The underlying analysis is done by structuring the information according to a general case study research process (presented in Section 2.4). Where different recommendations or terms appear, the ones considered most suited for the software engineering domain are selected, based on the authors’ experience of conducting case studies and reading case study reports. Links to data sources are given by regular references. Specifically, checklists for researchers and readers are derived through a systematic analysis of existing checklists (Höst and Runeson 2007), and later evaluated by PhD students as well as by members of the International Software Engineering Research Network, and updated accordingly.

This paper does not provide absolute statements for what is considered a “good” case study in software engineering. Rather, it focuses on a set of issues that all contribute to the quality of the research. The minimum requirement for each issue must be judged in its context, and will most probably evolve over time. This is similar to the principles by Klein and Myers for IS case studies (Klein and Myers 1999): “it is incumbent upon authors, reviewers, and editors to exercise their judgment and discretion in deciding whether, how and which of the principles should be applied”. Nor do we assess the current status of case study research in software engineering; that is worth a study of its own, similar to the systematic review on experiments by Sjøberg et al. (2005). Further, examples are used both to illustrate good practices and the lack thereof.

This paper is outlined as follows. We first define a set of terms in the field of empirical research, which we use throughout the paper (Section 2.1), set case study research into the context of other research methodologies (Section 2.2) and discuss the motivations for software engineering case studies (Section 2.3). We define a case study research process (Section 2.4) and terminology (Section 2.5), which are used for the rest of the paper. Section 3 discusses the design of a case study and planning for data collection. Section 4 describes the process of data collection. In Section 5, issues on data analysis are treated, and reporting is discussed in Section 6. Section 7 discusses reading and reviewing case study reports, and Section 8 summarizes the paper. Checklists for conducting and reading case study research are linked to each step in the case study process, and summarized in the Appendix.

2 Background and Definition of Concepts

2.1 Research Methodology

In order to set the scope for the type of empirical studies we address in this paper, we put case studies into the context of other research methodologies and refer to general definitions of the term case study according to Robson (2002), Yin (2003) and Benbasat et al. (1987), respectively.

The three definitions agree that a case study is an empirical method aimed at investigating contemporary phenomena in their context. Robson calls it a research strategy and stresses the use of multiple sources of evidence; Yin denotes it an inquiry and remarks that the boundary between the phenomenon and its context may be unclear; while Benbasat et al. make the definition somewhat more specific, mentioning information gathering from few entities (people, groups, organizations) and the lack of experimental control.

There are three other major research methodologies which are related to case studies:

Survey, which is the “collection of standardized information from a specific population, or some sample from one, usually, but not necessarily by means of a questionnaire or interview” (Robson 2002 ).

Experiment, or controlled experiment, which is characterized by “measuring the effects of manipulating one variable on another variable” (Robson 2002) and by the fact that “subjects are assigned to treatments by random” (Wohlin et al. 2000). Quasi-experiments are similar to controlled experiments, except that subjects are not randomly assigned to treatments. Quasi-experiments conducted in an industry setting may have many characteristics in common with case studies.

Action research, with its purpose to “influence or change some aspect of whatever is the focus of the research” (Robson 2002 ), is closely related to case study. More strictly, a case study is purely observational while action research is focused on and involved in the change process. In software process improvement (Dittrich et al. 2008 ; Iversen et al. 2004 ) and technology transfer studies (Gorschek et al. 2006 ), the research method should be characterized as action research. However, when studying the effects of a change, e.g. in pre- and post-event studies, we classify the methodology as case study. In IS, where action research is widely used, there is a discussion on finding the balance between action and research, see e.g. (Avison et al. 2001 ; Baskerville and Wood-Harper 1996 ). For the research part of action research, these guidelines apply as well.

Easterbrook et al. (2008) also count ethnographic studies among the major research methodologies. We prefer to consider ethnographic studies as a specialized type of case study with a focus on cultural practices (Easterbrook et al. 2008) or as long-duration studies with large amounts of participant-observer data (Klein and Myers 1999). Zelkowitz and Wallace (1998) define four different “observational methods” in software engineering: project monitoring, case study, assertion and field study. Our guidelines apply to all of these except assertion, which is not considered a proper research method. In general, the borderline between the types of study is not always distinct. We prefer to see project monitoring as a part of a case study and field studies as multiple case studies. Robson summarizes his view, which seems functional in software engineering as well: “Many flexible design studies, although not explicitly labeled as such, can be usefully viewed as case studies.” (Robson 2002, p. 185).

Finally, a case study may contain elements of other research methods: a survey may be conducted within a case study, a literature search often precedes a case study, and archival analyses may be part of its data collection. Ethnographic methods, like interviews and observations, are mostly used for data collection in case studies.

2.2 Characteristics of Research Methodologies

Different research methodologies serve different purposes; one type of research methodology does not fit all purposes. We distinguish between four types of purposes for research based on Robson’s ( 2002 ) classification:

Exploratory—finding out what is happening, seeking new insights and generating ideas and hypotheses for new research.

Descriptive—portraying a situation or phenomenon.

Explanatory—seeking an explanation of a situation or a problem, mostly but not necessarily in the form of a causal relationship.

Improving—trying to improve a certain aspect of the studied phenomenon.

Case study methodology was originally used primarily for exploratory purposes, and some researchers still limit case studies to this purpose, as discussed by Flyvbjerg (2007). However, it is also used for descriptive purposes, if the generality of the situation or phenomenon is of secondary importance. Case studies may be used for explanatory purposes, e.g. in interrupted time series design (pre- and post-event studies), although the isolation of factors may be a problem; this involves testing existing theories in confirmatory studies. Finally, as indicated above, case studies in the software engineering discipline often take an improvement approach, similar to action research; see e.g. the QA study (Andersson and Runeson 2007b).

Klein and Myers define three types of case study depending on the research perspective, positivist, critical and interpretive (Klein and Myers 1999 ). A positivist case study searches evidence for formal propositions, measures variables, tests hypotheses and draws inferences from a sample to a stated population, i.e. is close to the natural science research model (Lee 1989 ) and related to Robson’s explanatory category. A critical case study aims at social critique and at being emancipatory, i.e. identifying different forms of social, cultural and political domination that may hinder human ability. Improving case studies may have a character of being critical. An interpretive case study attempts to understand phenomena through the participants’ interpretation of their context, which is similar to Robson’s exploratory and descriptive types. Software engineering case studies tend to lean towards a positivist perspective, especially for explanatory type studies.

Conducting research on real world issues implies a trade-off between level of control and degree of realism. The realistic situation is often complex and non-deterministic, which hinders the understanding of what is happening, especially for studies with explanatory purposes. On the other hand, increasing the control reduces the degree of realism, sometimes leading to the real influential factors being set outside the scope of the study. Case studies are by definition conducted in real world settings, and thus have a high degree of realism, mostly at the expense of the level of control.

The data collected in an empirical study may be quantitative or qualitative. Quantitative data involves numbers and classes, while qualitative data involves words, descriptions, pictures, diagrams etc. Quantitative data is analyzed using statistics, while qualitative data is analyzed using categorization and sorting. Case studies tend mostly to be based on qualitative data, as these provide a richer and deeper description. However, a combination of qualitative and quantitative data often provides better understanding of the studied phenomenon (Seaman 1999 ), i.e. what is sometimes called “mixed methods” (Robson 2002 ).

The research process may be characterized as fixed or flexible according to Anastas and MacDonald (1994) and Robson (2002). In a fixed design process, all parameters are defined at the launch of the study, while in a flexible design process key parameters of the study may be changed during its course. Case studies are typically flexible design studies, while experiments and surveys are fixed design studies. Other literature uses the terms quantitative and qualitative design studies for fixed and flexible design studies, respectively. We prefer to adhere to the fixed/flexible terminology since it avoids the confusion that arises because a study with a flexible (qualitative) design may collect both qualitative and quantitative data; otherwise it may be unclear whether the term qualitative refers to the data or to the design of the study.

Triangulation is important to increase the precision of empirical research. Triangulation means taking different angles towards the studied object and thus providing a broader picture. The need for triangulation is obvious when relying primarily on qualitative data, which is broader and richer, but less precise than quantitative data. However, it is relevant also for quantitative data, e.g. to compensate for measurement or modeling errors. Four different types of triangulation may be applied (Stake 1995 ):

Data (source) triangulation—using more than one data source or collecting the same data on different occasions.

Observer triangulation—using more than one observer in the study.

Methodological triangulation—combining different types of data collection methods, e.g. qualitative and quantitative methods.

Theory triangulation—using alternative theories or viewpoints.

Table 1 shows an overview of the primary characteristics of the research methodologies discussed above.

Yin adds specifically to the characteristics of a case study that it (Yin 2003 ):

“copes with the technically distinctive situation in which there will be many more variables than data points, and as one result

relies on multiple sources of evidence, with data needing to converge in a triangulating fashion, and as another result

benefits from the prior development of theoretical propositions to guide data collection and analysis.”

Hence, a case study will never provide conclusions with statistical significance. Instead, many different kinds of evidence (figures, statements, documents) are linked together to support a strong and relevant conclusion.

Perry et al. define similar criteria for a case study (Perry et al. 2005 ). It is expected that a case study:

“Has research questions set out from the beginning of the study

Data is collected in a planned and consistent manner

Inferences are made from the data to answer the research question

Explores a phenomenon, or produces an explanation, description, or causal analysis of it

Threats to validity are addressed in a systematic way.”

In summary, the key characteristics of a case study are that 1) it is of flexible type, coping with the complex and dynamic characteristics of real-world phenomena, like software engineering, 2) its conclusions are based on a clear chain of evidence, whether qualitative or quantitative, collected from multiple sources in a planned and consistent manner, and 3) it adds to existing knowledge by being based on previously established theory, if such exists, or by building theory.

2.3 Why Case Studies in Software Engineering?

Case studies are commonly used in areas like psychology, sociology, political science, social work, business, and community planning (e.g. Yin 2003 ). In these areas case studies are conducted with objectives to increase knowledge about individuals, groups, and organizations, and about social, political, and related phenomena. It is therefore reasonable to compare the area of software engineering to those areas where case study research is common, and to compare the research objectives in software engineering to the objectives of case study research in other areas.

The area of software engineering involves development, operation, and maintenance of software and related artifacts, e.g. (Jedlitschka and Pfahl 2005 ). Research on software engineering is to a large extent aimed at investigating how this development, operation, and maintenance are conducted by software engineers and other stakeholders under different conditions. Software development is carried out by individuals, groups and organizations, and social and political questions are of importance for this development. That is, software engineering is a multidisciplinary area involving areas where case studies normally are conducted. This means that many research questions in software engineering are suitable for case study research.

The definition of case study in Section 2.1 focuses on studying phenomena in their context, especially when the boundary between the phenomenon and its context is unclear. This is particularly true in software engineering. Experimentation in software engineering has clearly shown, e.g. when trying to replicate studies, that there are many factors impacting on the outcome of a software engineering activity (Shull et al. 2002 ). Case studies offer an approach which does not need a strict boundary between the studied object and its environment; perhaps the key to understanding is in the interaction between the two?

2.4 Case Study Research Process

When conducting a case study, there are five major process steps to be walked through:

Case study design: objectives are defined and the case study is planned.

Preparation for data collection: procedures and protocols for data collection are defined.

Collecting evidence: execution with data collection on the studied case.

Analysis of collected data: the collected data is analyzed.

Reporting: the study and its conclusions are reported.

This process is almost the same for any kind of empirical study; compare e.g. to the processes proposed by Wohlin et al. (2000) and Kitchenham et al. (2002). However, as case study methodology is a flexible design strategy, there is a significant amount of iteration over the steps (Andersson and Runeson 2007b). The data collection and analysis may be conducted incrementally. If insufficient data is collected for the analysis, more data collection may be planned, etc. However, there is a limit to the flexibility; the case study should have specific objectives set out from the beginning. If the objectives change, it is a new case study rather than a change to the existing one, though this is a matter of judgment, as with all other classifications. Eisenhardt adds two steps between 4 and 5 above in her process for building theories from case study research (Eisenhardt 1989): a) shaping hypotheses and b) enfolding literature, while the remaining steps are the same apart from terminological variations.

2.5 Definitions

In this paper, we use the following terminology. The overall objective is a statement of what is expected to be achieved in the case study. Others may use goals, aims or purposes as synonyms or hyponyms for objective. The objective is refined into a set of research questions, which are to be answered through the case study analysis. A case may be based on a software engineering theory. It is beyond the scope of this article to discuss in detail what is meant by a theory. However, Sjøberg et al. describe a framework for theories including constructs of interest, relations between constructs, explanations of the relations, and the scope of the theory (Sjøberg et al. 2008). With this way of describing theories, software engineering theories include at least one construct from software engineering. A research question may be related to a hypothesis (sometimes called a proposition (Yin 2003)), i.e. a supposed explanation for an aspect of the phenomenon under study. Hypotheses may alternatively be generated from the case study for further research. The case is referred to as the object of the study (e.g. a project), and it contains one or more units of analysis (e.g. subprojects). Data is collected from the subjects of the study, i.e. those providing the information. Data may be quantitative (numbers, measurements) or qualitative (words, descriptions). A case study protocol defines the detailed procedures for collection and analysis of the raw data, sometimes called field procedures.
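The relationships among these terms can be illustrated with a minimal sketch; the class names and example values below are our own hypothetical illustrations for clarity, not part of any established framework:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class UnitOfAnalysis:
    name: str  # e.g. a subproject within the studied case

@dataclass
class ResearchQuestion:
    text: str
    hypothesis: Optional[str] = None  # a supposed explanation (proposition), if any

@dataclass
class CaseStudy:
    objective: str                        # what the study is expected to achieve
    research_questions: List[ResearchQuestion]
    case: str                             # the object of study, e.g. a project
    units_of_analysis: List[UnitOfAnalysis]
    subjects: List[str]                   # those providing the information

# Hypothetical embedded case study with two units of analysis.
study = CaseStudy(
    objective="Explore how agile practices affect coordination",
    research_questions=[ResearchQuestion("How do daily stand-ups support coordination?")],
    case="Project X",
    units_of_analysis=[UnitOfAnalysis("Subproject A"), UnitOfAnalysis("Subproject B")],
    subjects=["developer", "project manager"],
)
assert len(study.units_of_analysis) >= 1  # a case contains one or more units
```

Note that the objective sits above the research questions, and the units of analysis sit inside the case, mirroring the refinement described above.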

The guidelines for conducting case studies presented below are organized according to this process. Section 3 is about setting up goals for the case study and preparing for data collection, Section 4 discusses collection of data, Section 5 discusses data analysis and Section 6 provides some guidelines for reporting.

3 Case Study Design and Planning

3.1 Defining a Case

Case study research is of flexible type, as mentioned before. This does not mean planning is unnecessary. On the contrary, good planning for a case study is crucial for its success. There are several issues that need to be planned, such as what methods to use for data collection, what departments of an organization to visit, what documents to read, which persons to interview, how often interviews should be conducted, etc. These plans can be formulated in a case study protocol, see Section 3.2 .

A plan for a case study should at least contain the following elements (Robson 2002 ):

Objective—what to achieve?

The case—what is studied?

Theory—frame of reference

Research questions—what to know?

Methods—how to collect data?

Selection strategy—where to seek data?

The objective of the study may be, for example, exploratory, descriptive, explanatory, or improving. The objective is naturally more generally formulated and less precise than in fixed research designs. The objective is initially more like a focus point which evolves during the study. The research questions state what needs to be known in order to fulfill the objective of the study. Similar to the objective, the research questions evolve during the study and are narrowed to specific research questions during the study iterations (Andersson and Runeson 2007b).

The case may in general be virtually anything which is a “contemporary phenomenon in its real-life context” (Yin 2003). In software engineering, the case may be a software development project, which is the most straightforward choice. It may alternatively be an individual, a group of people, a process, a product, a policy, a role in the organization, an event, a technology, etc. The project, individual, group etc. may also constitute a unit of analysis within a case. In the information systems field, the case may be “individuals, groups…or an entire organization. Alternatively, the unit of analysis may be a specific project or decision” (Benbasat et al. 1987). Studies of “toy programs” or the like are of course excluded due to their lack of real-life context. Yin (2003) distinguishes between holistic case studies, where the case is studied as a whole, and embedded case studies, where multiple units of analysis are studied within a case, see Fig. 1. Whether to define a study consisting of two cases as holistic or embedded depends on what we define as the context and research goals. In our XP example, two projects are studied in two different companies in two different application domains, both using agile practices (Karlström and Runeson 2006). The projects may be considered two units of analysis in an embedded case study if the context is software companies in general and the research goal is to study agile practices. On the contrary, if the context is considered to be the specific company or application domain, they have to be seen as two separate holistic cases. Benbasat et al. comment on a specific case study, “Even though this study appeared to be a single-case, embedded unit analysis, it could be considered a multiple-case design, due to the centralized nature of the sites.” (Benbasat et al. 1987).

Fig. 1 Holistic case study (left) and embedded case study (right)

Using theories to develop the research direction is not well established in the software engineering field, as concluded in a systematic review on the topic (Hannay et al. 2007 ; Shull and Feldman 2008 ). However, defining the frame of reference of the study makes the context of the case study research clear, and helps both those conducting the research and those reviewing the results of it. As theories are underdeveloped in software engineering, the frame of reference may alternatively be expressed in terms of the viewpoint taken in the research and the background of the researchers. Grounded theory case studies naturally have no specified theory (Corbin and Strauss 2008 ).

The principal decisions on methods for data collection are defined at design time for the case study, although detailed decisions on data collection procedures are taken later. Lethbridge et al. ( 2005 ) define three categories of methods: direct (e.g. interviews), indirect (e.g. tool instrumentation) and independent (e.g. documentation analysis). These are further elaborated in Section 4 .

In case studies, the case and the units of analysis should be selected intentionally. This is in contrast to surveys and experiments, where subjects are sampled from a population to which the results are intended to be generalized. The purpose of the selection may be to study a case that is expected to be “typical”, “critical”, “revelatory” or “unique” in some respect (Benbasat et al. 1987 ), and the case is selected accordingly. Flyvbjerg defines four variants of information-oriented case study selections: “extreme/deviant”, “maximum variation”, “critical” and “paradigmatic” (Flyvbjerg 2007 ). In a comparative case study, the units of analysis must be selected to have the variation in properties that the study intends to compare. However, in practice, many cases are selected based on availability (Benbasat et al. 1987 ) as is the case for many experiments (Sjøberg et al. 2005 ).

Case selection is particularly important when replicating case studies. A case study may be literally replicated , i.e. the case is selected to predict similar results, or it is theoretically replicated , i.e. the case is selected to predict contrasting results for predictable reasons (Yin 2003 ).

3.2 Case Study Protocol

The case study protocol is a container for the design decisions on the case study as well as the field procedures for carrying it out. The protocol is a continuously evolving document that is updated whenever the plans for the case study change.

There are several reasons for keeping an updated version of a case study protocol. Firstly, it serves as a guide when conducting the data collection, and in that way prevents the researcher from failing to collect data that were planned to be collected. Secondly, the process of formulating the protocol makes the research concrete in the planning phase, which may help the researcher to decide what data sources to use and what questions to ask. Thirdly, other researchers and relevant people may review it in order to give feedback on the plans. Feedback on the protocol from other researchers can, for example, lower the risk of missing relevant data sources, interview questions or roles to include in the research, and help assure the relation between research questions and interview questions. Finally, it can serve as a log or diary where all conducted data collection and analysis is recorded together with change decisions based on the flexible nature of the research. This can be an important source of information when the case study is later reported. In order to keep track of changes during the research project, the protocol should be kept under some form of version control.

Pervan and Maimbo propose an outline of a case study protocol, which is summarized in Table 2. As the proposal shows, the protocol is quite detailed to support a well-structured research approach.

3.3 Ethical Considerations

At design time of a case study, ethical considerations must be made (Singer and Vinson 2002 ). Even though a research study first and foremost is built on trust between the researcher and the case (Amschler Andrews and Pradhan 2001 ), explicit measures must be taken to prevent problems. In software engineering, case studies often include dealing with confidential information in an organization. If it is not clear from the beginning how this kind of information is handled and who is responsible for accepting what information to publish, there may be problems later on. Key ethical factors include:

Informed consent

Review board approval

Confidentiality

Handling of sensitive results

Inducements

Subjects and organizations must explicitly agree to participate in the case study, i.e. give informed consent. In some countries, this is even legally required. It may be tempting for the researcher to collect data, e.g. through indirect or independent data collection methods, without asking for consent. However, ethical standards must be maintained for the long-term trust in software engineering research.

Legislation of research ethics differs between countries and continents. In many countries it is mandatory to have the study proposal reviewed and accepted with respect to ethical issues (Seaman 1999) by a review board or a similar function at a university. In other countries, there are no such rules. Even if there are no such rules, it is recommended that the case study protocol is reviewed by colleagues to help avoid pitfalls.

Consent agreements are preferably handled through a form or contract between the researchers and the individual participant; see e.g. Robson (2002) for an example. In an empirical study conducted by the authors of this paper, the following information was included in this kind of form:

Names of researchers and contact information.

Purpose of empirical study.

Procedures used in the empirical study, i.e. a short description of what the participant should do during the study and what steps the researcher will carry out during these activities.

A text clearly stating that the participation is voluntary, and that collected data will be anonymous.

A list of known risks.

A list of benefits for the participants, in this case for example experience from using a new technique and feedback effectiveness.

A description of how confidentiality will be assured. This includes a description of how collected material will be coded and identified in the study.

Information about approvals from review board.

Date and signatures from participant and researchers.

If the researchers intend to use the data for other, not yet defined purposes, this should be signed separately to allow participants to choose if their contribution is for the current study only, or for possible future studies.

Issues of confidentiality and publication should also be regulated in a contract between the researcher and the studied organization. However, information can be sensitive not only when it leaks outside a company. Data collected from and opinions stated by individual employees may be sensitive if presented e.g. to their managers (Singer and Vinson 2002). The researchers must have the right to maintain their integrity and adhere to agreed procedures in such cases. Companies may not know academic practices for publication and dissemination, and must hence be explicitly informed about those. From a publication point of view, the relevant data to publish is rarely sensitive to the company since the data may be made anonymous. However, it is important to remember that it is not always sufficient to remove names of companies or individuals. They may be identified by their characteristics if they are selected from a small set of people or companies.

Results may be sensitive to a company, e.g. by revealing deficiencies in their software engineering practices, or if their product comes out last in a comparison (Amschler Andrews and Pradhan 2001). The chance that this may occur must be discussed upfront and made clear to the participants of the case study. In case violations of the law are identified during the case study, these must be reported, even though “whistle-blowers” are rarely rewarded.

The inducements for individuals and organizations to participate in a case study vary, but there are always some kinds of incentives, tangible or intangible. It is preferable to make the inducements explicit, i.e. specify what the incentives are for the participants. Thereby the inducement’s role in threatening the validity of the study may also be analyzed.

Giving feedback to the participants of a study is critical for the long-term trust and for the validity of the research. Firstly, transcripts of interviews and observations should be sent back to the participants to enable correction of the raw data. Secondly, analyses should be presented to them in order to maintain their trust in the research. Participants need not necessarily agree with the outcome of the analysis, but feeding back the analysis results increases the validity of the study.

3.4 Checklist

The checklist items for case study design are shown in Table  3 .

4 Collecting Data

4.1 Different Data Sources

There are several different sources of information that can be used in a case study. It is important to use several data sources in a case study in order to limit the effects of one interpretation of one single data source. If the same conclusion can be drawn from several sources of information, i.e. triangulation (Section 2.2), this conclusion is stronger than a conclusion based on a single source. In a case study it is also important to take into account the viewpoints of different roles, and to investigate differences, for example, between different projects and products. Commonly, conclusions are drawn by analyzing differences between data sources.

According to Lethbridge et al. ( 2005 ) data collection techniques can be divided into three levels:

First degree: Direct methods mean that the researcher is in direct contact with the subjects and collects data in real time. This is the case with, for example, interviews, focus groups, Delphi surveys (Dalkey and Helmer 1963), and observations with “think aloud protocols”.

Second degree: Indirect methods where the researcher directly collects raw data without actually interacting with the subjects during the data collection. This approach is, for example taken in Software Project Telemetry (Johnson et al. 2005 ) where the usage of software engineering tools is automatically monitored, and observed through video recording.

Third degree: Independent analysis of work artifacts where already available and sometimes compiled data is used. This is for example the case when documents such as requirements specifications and failure reports from an organization are analyzed or when data from organizational databases such as time accounting is analyzed.

First degree methods are mostly more expensive to apply than second or third degree methods, since they require significant effort both from the researcher and the subjects. An advantage of first and second degree methods is that the researcher can to a large extent control exactly what data is collected, how it is collected, in what form the data is collected, what the context is, etc. Third degree methods are mostly less expensive, but they do not offer the same control to the researcher; hence the quality of the data is not under control either, neither regarding the original data quality nor its use for the case study purpose. In many cases the researcher must, to some extent, base the details of the data collection on what data is available. For third degree methods it should also be noted that the data has been collected and recorded for another purpose than that of the research study, contrary to general metrics guidelines (van Solingen and Berghout 1999). It is not certain that the requirements on data validity and completeness were the same when the data was collected as they are in the research study.
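The cost/control trade-off between the three degrees can be summarized in a small sketch; the numeric rankings are a rough simplification of the qualitative discussion above, not measured values:

```python
# Qualitative ranking of Lethbridge et al.'s three degrees of data
# collection (1 = low, 3 = high); a simplification of the text above.
degrees = {
    "first (direct, e.g. interviews)":     {"researcher_effort": 3, "control_over_data": 3},
    "second (indirect, e.g. tool logs)":   {"researcher_effort": 2, "control_over_data": 3},
    "third (independent, e.g. documents)": {"researcher_effort": 1, "control_over_data": 1},
}

# First degree methods are the most expensive but give the most control;
# third degree data was collected for another purpose, so control is lowest.
cheapest = min(degrees, key=lambda d: degrees[d]["researcher_effort"])
assert cheapest.startswith("third")
```

The point of the sketch is only that effort and control move together: lowering the data collection cost also lowers the researcher's control over data quality.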

In Sections 4.2 – 4.5 we discuss specific data collection methods, where we have found interviews, observations, archival data and metrics being applicable to software engineering case studies (Benbasat et al. 1987 ; Yin 2003 ).

4.2 Interviews

Data collection through interviews is important in case studies. In interview-based data collection, the researcher asks a series of questions to a set of subjects about the areas of interest in the case study. In most cases one interview is conducted with every single subject, but it is possible to conduct group-interviews. The dialogue between the researcher and the subject(s) is guided by a set of interview questions.

The interview questions are based on the topic of interest in the case study. That is, the interview questions are based on the formulated research questions (but they are of course not formulated in the same way). Questions can be open, i.e. allowing and inviting a broad range of answers and issues from the interviewed subject, or closed, offering a limited set of alternative answers.

Interviews can, for example, be divided into unstructured, semi-structured and fully structured interviews (Robson 2002). In an unstructured interview, the interview questions are formulated as general concerns and interests from the researcher. In this case the interview conversation will develop based on the interests of the subject and the researcher. In a fully structured interview all questions are planned in advance and all questions are asked in the same order as in the plan. In many ways, a fully structured interview is similar to a questionnaire-based survey. In a semi-structured interview, questions are planned, but they are not necessarily asked in the same order as they are listed. The development of the conversation in the interview determines the order in which the different questions are handled, and the researcher can use the list of questions to make certain that all questions are handled. Additionally, semi-structured interviews allow for improvisation and exploration of the studied objects. Semi-structured interviews are common in case studies. The different types of interviews are summarized in Table 4.

An interview session may be divided into a number of phases. First the researcher presents the objectives of the interview and the case study, and explains how the data from the interview will be used. Then a set of introductory questions is asked about the background etc. of the subject, which are relatively simple to answer. After the introduction come the main interview questions, which take up the largest part of the interview. If the interview contains personal and maybe sensitive questions, e.g. concerning economy, opinions about colleagues, why things went wrong, or questions related to the interviewee's own competence (Hove and Anda 2005), special care must be taken. In this situation it is important that the interviewee is ensured confidentiality and that the interviewee trusts the interviewer. It is not recommended to start the interview with these questions or to introduce them before a climate of trust has been established. It is recommended that the major findings are summarized by the researcher towards the end of the interview, in order to get feedback and avoid misunderstandings.

Interview sessions can be structured according to three general principles, as outlined in Fig.  2 (Caroline Seaman, personal communication). The funnel model begins with open questions and moves towards more specific ones. The pyramid model begins with specific ones, and opens the questions during the course of the interview. The time-glass model begins with open questions, straightens the structure in the middle and opens up again towards the end of the interview.

Fig. 2 General principles for interview sessions: (a) funnel, (b) pyramid, and (c) time-glass

During the interview sessions it is recommended to record the discussion in a suitable audio or video format. Even if notes are taken, it is in many cases hard to record all details, and it is impossible to know what is important to record during the interview. Possibly a dedicated and trained scribe may capture sufficient detail in real time, but the recording should at least be done as a backup (Hove and Anda 2005). When the interview has been recorded it needs to be transcribed into text before it is analyzed. This is a time-consuming task, but in many cases new insights are made during the transcription, and it is therefore not recommended that this task is conducted by anyone other than the researcher. In some cases it may be advantageous to have the transcripts reviewed by the interview subject. In this way questions about what was actually said can be sorted out, and the interview subject has the chance to point out if she does not agree with the interpretation of what was said or if she simply has changed her mind and wants to rephrase any part of the answers.

During the planning phase of an interview study it is decided whom to interview. Due to the qualitative nature of the case study it is recommended to select subjects based on differences instead of trying to replicate similarities, as discussed in Section 3.1. This means that it is good to try to involve different roles, personalities, etc. in the interviews. The number of interviewees has to be decided during the study. One criterion for when sufficient interviews have been conducted is “saturation”, i.e. when no new information or viewpoint is gained from new subjects (Corbin and Strauss 2008).
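The saturation criterion can be operationalized as a simple stopping rule: stop scheduling new interviews once the most recent interviews have contributed no codes that were not already seen. A hypothetical sketch, where each interview is represented by the set of coded concepts it yielded:

```python
def saturated(interview_codes, window=2):
    """Return True when the last `window` interviews added no new codes.

    `interview_codes` is a list of sets, one set of coded concepts per
    interview, in the order the interviews were conducted.
    """
    if len(interview_codes) <= window:
        return False  # too few interviews to judge saturation
    seen_before = set().union(*interview_codes[:-window])
    new_in_window = set().union(*interview_codes[-window:]) - seen_before
    return not new_in_window

# Hypothetical coding of five interviews: the last two add nothing new.
codes = [{"planning", "trust"}, {"trust", "tooling"}, {"planning", "deadlines"},
         {"trust"}, {"deadlines", "tooling"}]
assert saturated(codes, window=2)          # no new codes in interviews 4-5
assert not saturated(codes[:3], window=2)  # still discovering new codes
```

In practice the judgment is of course qualitative rather than mechanical; the window size and the coding scheme are choices the researcher must motivate.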

4.3 Observations

Observations can be conducted in order to investigate how a certain task is conducted by software engineers. This is a first or second degree method according to the classification in Section 4.1. There are many different approaches for observation. One approach is to monitor a group of software engineers with a video recorder and later on analyze the recording, for example through protocol analysis (Owen et al. 2006; von Mayrhauser and Vans 1996). Another alternative is to apply a “think aloud” protocol, where the researcher repeatedly asks questions like “What is your strategy?” and “What are you thinking?” to remind the subjects to think aloud. This can be combined with recording of audio and keystrokes as proposed e.g. by Wallace et al. (2002). Observation in meetings is another type, where meeting attendants interact with each other, and thus generate information about the studied object. An alternative approach is presented by Karahasanović et al. (2005), where a tool for sampling is used to obtain data and feedback from the participants.

Approaches for observations can be divided into high or low interaction of the researcher and high or low awareness of the subjects of being observed, see Table  5 .

Observations according to case 1 or case 2 are typically conducted in action research or classical ethnographic studies where the researcher is part of the team, and not only seen as a researcher by the other team members. The difference between case 1 and case 2 is that in case 1 the researcher is seen as an “observing participant” by the other subjects, while she is more seen as a “normal participant” in case 2. In case 3 the researcher is seen only as a researcher. The approaches for observation typically include observations with first degree data collection techniques, such as a “think aloud” protocol as described above. In case 4 the subjects are typically observed with a second degree technique such as video recording (sometimes called video ethnography).

An advantage of observations is that they may provide a deep understanding of the phenomenon that is studied. Further, it is particularly relevant to use observations where it is suspected that there is a deviation between an “official” view of matters and the “real” case (Robinson et al. 2007). It should, however, be noted that observation produces a substantial amount of data, which makes the analysis time-consuming.

4.4 Archival Data

Archival data refers to, for example, meeting minutes, documents from different development phases, organizational charts, financial records, and previously collected measurements in an organization. Benbasat et al. ( 1987 ) and Yin ( 2003 ) distinguish between documentation and archival records, while we treat them together and see the borderline rather between qualitative data (minutes, documents, charts) and quantitative data (records, metrics), the latter discussed in Section 4.5 .

Archival data is a third degree type of data that can be collected in a case study. For this type of data a configuration management tool is an important source, since it enables the collection of a number of different documents and different versions of documents. As for other third degree data sources it is important to keep in mind that the documents were not originally developed with the intention to provide data to research in a case study. A document may, for example, include parts that are mandatory according to an organizational template but of lower interest for the project, which may affect the quality of that part. It should also be noted that it is possible that some information that is needed by the researcher may be missing, which means that archival data analysis must be combined with other data collection techniques, e.g. surveys, in order to obtain missing historical factual data (Flynn et al. 1990 ). It is of course hard for the researcher to assess the quality of the data, although some information can be obtained by investigating the purpose of the original data collection, and by interviewing relevant people in the organization.

4.5 Metrics

The above mentioned data collection techniques are mostly focused on qualitative data. However, quantitative data is also important in a case study. Software measurement is the process of representing software entities, like processes, products, and resources, in quantitative numbers (Fenton and Pfleeger 1996).

Collected data can either be defined and collected for the purpose of the case study, or already available data can be used. The first case gives, of course, the most flexibility and the data that is most suitable for the research questions under investigation.

The definition of what data to collect should be based on a goal-oriented measurement technique, such as the Goal Question Metric method (GQM) (Basili and Weiss 1984; van Solingen and Berghout 1999). In GQM, goals are first formulated, questions are then refined based on these goals, and after that metrics are derived based on the questions. This means that metrics are derived from goals formulated for the measurement activity, and thus that relevant metrics are collected. It also implies that the researcher can control the quality of the collected data and that no unnecessary data is collected.
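The goal–question–metric derivation chain can be sketched as a small data structure. This is a hypothetical illustration only; the goal text, question, and metric names are invented, and GQM itself prescribes no particular representation:

```python
from dataclasses import dataclass, field

# Sketch of the GQM chain: each metric is collected only because it
# answers a question, and each question only refines a stated goal.

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)  # metric names answering this question

@dataclass
class Goal:
    purpose: str
    questions: list = field(default_factory=list)

goal = Goal("Characterize defect inflow to improve test planning")
q = Question("How many defects are found per test week?")
q.metrics += ["defects_per_week", "test_effort_hours"]
goal.questions.append(q)

# Every collected metric can be traced back to the goal it serves,
# which is what prevents collecting unnecessary data.
traceable = {m: goal.purpose for que in goal.questions for m in que.metrics}
```

A metric with no entry in `traceable` would have no justifying goal and should not be collected.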

Examples of already available data are effort data from older projects, sales figures of products, metrics of product quality in terms of failures, etc. This kind of data may, for example, be available in a metrics database in an organization. When this kind of data is used, it should be noted that all the problems arise that a goal-oriented measurement approach otherwise solves. The researcher can neither control nor assess the quality of the data, since it was collected for another purpose, and, as for other forms of archival analysis, there is a risk of missing important data.

4.6 Checklists

The checklist items for preparation and conduct of data collection are shown in Tables 6 and 7, respectively.

5 Data Analysis

5.1 Quantitative Data Analysis

Data analysis is conducted differently for quantitative and qualitative data. For quantitative data, the analysis typically includes analysis of descriptive statistics, correlation analysis, development of predictive models, and hypothesis testing. All of these activities are relevant in case study research.

Descriptive statistics, such as mean values, standard deviations, histograms and scatter plots, are used to get an understanding of the data that has been collected. Correlation analysis and development of predictive models are conducted in order to describe how a measurement from a later process activity is related to an earlier process measurement. Hypothesis testing is conducted in order to determine if there is a significant effect of one or several variables (independent variables) on one or several other variables (dependent variables).
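As a minimal sketch of these techniques, the following fragment computes descriptive statistics and a Pearson correlation between an earlier and a later process measurement, using only the Python standard library. The effort figures are invented for illustration:

```python
import statistics

# Invented effort data (person-days) from two process phases.
design_effort = [12.0, 8.5, 15.0, 10.0, 11.5]
test_effort = [20.0, 14.0, 26.0, 17.0, 19.0]

# Descriptive statistics: a first understanding of the collected data.
mean_design = statistics.mean(design_effort)
stdev_design = statistics.stdev(design_effort)

# Correlation analysis: how a later process measurement relates to an
# earlier one (Pearson's r from the definitional formula).
n = len(design_effort)
my = statistics.mean(test_effort)
cov = sum((x - mean_design) * (y - my)
          for x, y in zip(design_effort, test_effort)) / (n - 1)
r = cov / (stdev_design * statistics.stdev(test_effort))
```

A strong positive `r` here would suggest that design effort predicts test effort, a relation a predictive model could then formalize; hypothesis testing would additionally require a statistics package.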

It should be noted that methods for quantitative analysis assume a fixed research design. For example, if a question with a quantitative answer is changed halfway through a series of interviews, this makes it impossible to interpret the mean value of the answers. Further, quantitative data sets from single cases tend to be very small, due to the number of respondents or measurement points, which causes special concerns in the analysis.

Quantitative analysis is not covered any further in this paper, since it is extensively covered in other texts; the rest of this section covers qualitative analysis. For more information about quantitative analysis, refer, for example, to Wohlin et al. (2000), Wohlin and Höst (2001), and Kitchenham et al. (2002).

5.2 Qualitative Data Analysis

Since case study research is a flexible research method, qualitative data analysis methods (Seaman 1999) are commonly used. The basic objective of the analysis is to derive conclusions from the data, keeping a clear chain of evidence. The chain of evidence means that a reader should be able to follow the derivation of results and conclusions from the collected data (Yin 2003). This means that sufficient information from each step of the study and every decision taken by the researcher must be presented.

In addition to the need to keep a clear chain of evidence in mind, analysis of qualitative research is characterized by analysis being carried out in parallel with the data collection, and by the need for systematic analysis techniques. Analysis must be carried out in parallel with the data collection since the approach is flexible and new insights are found during the analysis. In order to investigate these insights, new data must often be collected, and instrumentation such as interview questionnaires must be updated. The need to be systematic is a direct result of the data collection techniques being constantly updated, while at the same time a chain of evidence must be maintained.

In order to reduce bias by individual researchers, the analysis benefits from being conducted by multiple researchers. The preliminary results from each individual researcher are merged into a common analysis result in a second step. Keeping track of, and reporting, the cooperation scheme helps increase the validity of the study.

5.2.1 General Techniques for Analysis

There are two different types of techniques for analysis of qualitative data, hypothesis generating techniques and hypothesis confirmation techniques (Seaman 1999), which can be used for exploratory and explanatory case studies, respectively.

Hypothesis generation is intended to find hypotheses from the data. When using these kinds of techniques, there should not be too many hypotheses defined before the analysis is conducted. Instead the researcher should try to be unbiased and open to whatever hypotheses are to be found in the data. The results of these techniques are the hypotheses as such. Examples of hypothesis generating techniques are “constant comparisons” and “cross-case analysis” (Seaman 1999). Hypothesis confirmation techniques denote techniques that can be used to confirm that a hypothesis is really true, e.g. through analysis of more data. Triangulation and replication are examples of approaches for hypothesis confirmation (Seaman 1999). Negative case analysis tries to find alternative explanations that reject the hypotheses. These basic types of techniques are used iteratively and in combination. First hypotheses are generated and then they are confirmed. Hypothesis generation may take place within one cycle of a case study, or with data from one unit of analysis, and hypothesis confirmation may be done with data from another cycle or unit of analysis (Andersson and Runeson 2007b).

This means that analysis of qualitative data is conducted in a series of steps (based on Robson 2002, p. 459). First the data is coded, which means that parts of the text are given a code representing a certain theme, area, construct, etc. One code is usually assigned to many pieces of text, and one piece of text can be assigned more than one code. Codes can form a hierarchy of codes and sub-codes. The coded material can be combined with comments and reflections by the researcher (i.e. “memos”). When this has been done, the researcher can go through the material to identify a first set of hypotheses. These can, for example, be phrases that are similar in different parts of the material, patterns in the data, differences between sub-groups of subjects, etc. The identified hypotheses can then be used when further data collection is conducted in the field, resulting in an iterative approach where data collection and analysis are conducted in parallel as described above. During the iterative process a small set of generalizations can be formulated, eventually resulting in a formalized body of knowledge, which is the final result of the research effort. This is, of course, not a simple sequence of steps. Instead, they are executed iteratively and they affect each other.
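The coding step can be illustrated with a small sketch. The interview fragments, code names, and memo below are entirely invented; real coding schemes emerge from the data:

```python
# One code marks many pieces of text, and one piece of text can carry
# several codes; memos attach the researcher's reflections to a code.

segments = [
    {"subject": "S1", "text": "We skip reviews when deadlines are close.",
     "codes": ["time_pressure", "process_deviation"]},
    {"subject": "S2", "text": "Reviews are dropped under schedule stress.",
     "codes": ["time_pressure"]},
]
memos = {"time_pressure": "Recurs across subjects; candidate hypothesis."}

# Group pieces of text by code to look for recurring patterns.
by_code = {}
for seg in segments:
    for code in seg["codes"]:
        by_code.setdefault(code, []).append(seg["text"])
```

Here `time_pressure` is supported by statements from both subjects, the kind of pattern that would feed a first hypothesis and guide further data collection.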

The step in which hypotheses are identified deserves some further attention. It is in no way a simple step that can be carried out by following a detailed, mechanical approach. Instead it requires the ability to generalize, innovative thinking, etc. on the part of the researcher. This can be compared to quantitative analysis, where the majority of the innovative and analytical work of the researcher is in the planning phase (i.e. deciding design, statistical tests, etc.). There is, of course, also a need for innovative work in the analysis of quantitative data, but it is not as clear as in the planning phase. In qualitative analysis there are major needs for innovative and analytical work in both phases.

One example of a useful technique for analysis is tabulation, where the coded data is arranged in tables, which makes it possible to get an overview of the data. The data can, for example, be organized in a table where the rows represent codes of interest and the columns represent interview subjects. However, how to do this must be decided for each case study.
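A code-by-subject table of the kind described can be built directly from coded occurrences. The code names and subjects below are invented for illustration:

```python
# Coded occurrences: (code, interview subject) pairs, invented data.
coded = [("time_pressure", "S1"), ("time_pressure", "S2"),
         ("tool_support", "S1"), ("time_pressure", "S1")]

codes = sorted({c for c, _ in coded})
subjects = sorted({s for _, s in coded})

# Rows are codes of interest, columns are subjects, cells count occurrences.
table = {c: {s: 0 for s in subjects} for c in codes}
for c, s in coded:
    table[c][s] += 1

# Print a simple overview of the tabulated data.
print("code".ljust(15), *subjects)
for c in codes:
    print(c.ljust(15), *(table[c][s] for s in subjects))
```

Such an overview makes it easy to spot codes that recur across subjects versus codes raised by a single interviewee.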

There are specialized software tools available to support qualitative data analysis, e.g. NVivo and Atlas. However, in some cases standard tools such as word processors and spreadsheet tools are useful when managing the textual data.

5.2.2 Level of Formalism

A structured approach is, as described above, important in qualitative analysis. This means, for example, that in all cases a pre-planned approach for analysis must be applied, all decisions taken by the researcher must be recorded, all versions of instrumentation must be kept, links between data, codes, and memos must be explicitly recorded in documentation, etc. However, the analysis can be conducted at different levels of formalism. Robson (2002) mentions the following approaches:

Immersion approaches: These are the least structured approaches, relying heavily on the intuition and interpretive skills of the researcher. They may be hard to combine with requirements on keeping and communicating a chain of evidence.

Editing approaches: These approaches include few a priori codes, i.e. codes are defined based on findings of the researcher during the analysis.

Template approaches: These approaches are more formal and include more a priori codes, based on the research questions.

Quasi-statistical approaches: These approaches are the most formalized and include, for example, calculation of frequencies of words and phrases.
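The quasi-statistical level can be sketched as a word-frequency count over transcript text. The transcript sentence is invented for illustration:

```python
from collections import Counter
import re

# Invented transcript fragment; the most formalized analysis level
# reduces such text to frequencies of words and phrases.
transcript = "The build breaks often. The team fixes the build nightly."
words = re.findall(r"[a-z]+", transcript.lower())
freq = Counter(words)

# The most frequent words dominate the count regardless of meaning,
# which also hints at why raw frequencies are hard to interpret.
top = freq.most_common(2)
```

Function words such as "the" inevitably top such counts, illustrating the interpretation problem with this approach noted below.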

In our experience, editing approaches and template approaches are the most suitable in software engineering case studies. It is hard to present and maintain a clear chain of evidence with informal immersion approaches. It is also hard to interpret the result of, for example, frequencies of words in documents and interviews.

5.2.3 Validity

The validity of a study denotes the trustworthiness of the results, i.e. to what extent the results are true and not biased by the researchers’ subjective point of view. It is, of course, too late to begin considering validity during the analysis; validity must be addressed during all previous phases of the case study. However, validity is discussed in this section, since it cannot be finally evaluated until the analysis phase.

There are different ways to classify aspects of validity and threats to validity in the literature. Here we choose a classification scheme which is also used by Yin (2003) and similar to what is usually used in controlled experiments in software engineering (Wohlin et al. 2000). Some researchers have argued for a different classification scheme for flexible design studies (credibility, transferability, dependability, confirmability), while we prefer to operationalize this scheme for flexible design studies instead of changing the terms (Robson 2002). The scheme distinguishes between four aspects of validity, which can be summarized as follows:

Construct validity: This aspect of validity reflects to what extent the operational measures that are studied really represent what the researcher has in mind and what is investigated according to the research questions. If, for example, the constructs discussed in the interview questions are not interpreted in the same way by the researcher and the interviewed persons, there is a threat to the construct validity.

Internal validity: This aspect of validity is of concern when causal relations are examined. When the researcher is investigating whether one factor affects an investigated factor there is a risk that the investigated factor is also affected by a third factor. If the researcher is not aware of the third factor and/or does not know to what extent it affects the investigated factor, there is a threat to the internal validity.

External validity: This aspect of validity concerns to what extent it is possible to generalize the findings, and to what extent the findings are of interest to other people outside the investigated case. During analysis of external validity, the researcher tries to analyze to what extent the findings are of relevance for other cases. There is no population from which a statistically representative sample has been drawn. However, for case studies, the intention is to enable analytical generalization, where the results are extended to cases which have common characteristics and for which the findings are hence relevant, i.e. defining a theory.

Reliability: This aspect is concerned with to what extent the data and the analysis are dependent on the specific researchers. Hypothetically, if another researcher later on conducted the same study, the result should be the same. Threats to this aspect of validity are, for example, if it is not clear how to code collected data or if questionnaires or interview questions are unclear.

It is, as described above, important to consider the validity of the case study from the beginning. Examples of ways to improve validity are triangulation, developing and maintaining a detailed case study protocol, having designs, protocols, etc. reviewed by peer researchers, having collected data and obtained results reviewed by case subjects, spending sufficient time with the case, and giving sufficient concern to analysis of “negative cases”, i.e. looking for theories that contradict your findings.

5.3 Checklist

The checklist items for analysis of collected data are shown in Table 8.

6 Reporting

An empirical study cannot be distinguished from its reporting. The report communicates the findings of the study, but is also the main source of information for judging the quality of the study. Reports may have different audiences, such as peer researchers, policy makers, research sponsors, and industry practitioners (Yin 2003). This may lead to the need of writing different reports for different audiences. Here, we focus on reports with peer researchers as the main audience, i.e. journal or conference articles and possibly accompanying technical reports. Benbasat et al. propose that, due to the extensive amount of data generated in case studies, “books or monographs might be better vehicles to publish case study research” (Benbasat et al. 1987).

Guidelines for reporting experiments have been proposed by Jedlitschka and Pfahl (2005) and evaluated by Kitchenham et al. (2008). Their work aims at defining a standardized reporting of experiments that enables cross-study comparisons through e.g. systematic reviews. For case studies, the same high-level structure may be used, but since they are more flexible and mostly based on qualitative data, the low-level detail is less standardized and more dependent on the individual case. Below, we first discuss the characteristics of a case study report and then a proposed structure.

6.1 Characteristics

Robson (2002) defines a set of characteristics which a case study report should have, which in summary implies that it should:

tell what the study was about

communicate a clear sense of the studied case

provide a “history of the inquiry” so the reader can see what was done, by whom and how.

provide basic data in focused form, so the reader can make sure that the conclusions are reasonable

articulate the researcher’s conclusions and set them into a context they affect.

In addition, this must take place while balancing the researchers’ duty and goal to publish their results against the companies’ and individuals’ integrity (Amschler Andrews and Pradhan 2001).

Reporting the case study objectives and research questions is quite straightforward. If they are changed substantially over the course of the study, this should be reported to help understanding the case.

Describing the case might be more sensitive, since this might enable identification of the case or its subjects. For example, “a large telecommunications company in Sweden” is most probably a branch of the Ericsson Corporation. However, the case may be better characterized by other means than application domain and country. Internal characteristics, like the size of the studied unit, average age of the personnel, etc. may be more interesting than external characteristics like domain and turnover. Either the case constitutes a small subunit of a large corporation, and then it can hardly be identified among the many subunits, or it is a small company and hence it is hard to identify it among many candidates. Still, care must be taken to find this balance.

Providing a “history of the inquiry” requires substantially more detail than pure reporting of used methodologies, e.g. “we launched a case study using semi-structured interviews”. Since the validity of the study is highly related to what is done, by whom and how, the sequence of actions and the roles taking part in the study process must be reported. On the other hand, there is no room for every single detail of the case study conduct, and hence a balance must be found.

Data is collected in abundance in a qualitative study, and the analysis has as its main focus to reduce and organize data to provide a chain of evidence for the conclusions. However, to establish trust in the study, the reader needs relevant snapshots from the data that support the conclusions. These snapshots may be in the form of e.g. citations (typical or special statements), pictures, or narratives with anonymized subjects. Further, categories used in the data classification, leading to certain conclusions may help the reader follow the chain of evidence.

Finally, the conclusions must be reported and set into a context of implications, e.g. by forming theories. A case study cannot be generalized in the sense of being representative of a population, but this is not the only way of achieving and transferring knowledge. Conclusions can be drawn without statistics, and they may be interpreted and related to other cases. Communicating research results in terms of theories is an underdeveloped practice in software engineering (Hannay et al. 2007).

6.2 Structure

Yin (2003) proposes several alternative structures for reporting case studies in general.

Linear-analytic—the standard research report structure (problem, related work, methods, analysis, conclusions)

Comparative—the same case is repeated twice or more to compare alternative descriptions, explanations or points of view.

Chronological—a structure most suitable for longitudinal studies.

Theory-building—presents the case according to some theory-building logic in order to constitute a chain of evidence for a theory.

Suspense—inverts the linear-analytic structure and reports conclusions first and then backs them up with evidence.

Unsequenced—with none of the above, e.g. when reporting general characteristics of a set of cases.

For the academic reporting of case studies on which we focus, the linear-analytic structure is the most accepted one. The high-level structure for reporting experiments in software engineering proposed by Jedlitschka and Pfahl (2005) therefore also fits the purpose of case study reporting. However, some changes are needed, based on the specific characteristics of case studies and other issues identified in an evaluation conducted by Kitchenham et al. (2008). The resulting structure is presented in Table 9. The differences and our considerations are presented below.

In a case study, the theory may constitute a framework for the analysis; hence, there are two kinds of related work: a) earlier studies on the topic and b) theories on which the current study is based.

The design section corresponds to the case study protocol, i.e. it reports the planning of the case study including the measures taken to ensure the validity of the study.

Since the case study is of flexible design, and data collection and analysis are more intertwined, these sections may be combined into one. Consequently, the contents at the lower level must be adjusted, as proposed in Table 9. Specifically, for the combined data section, the coding scheme often constitutes a natural subsection structure. Alternatively, for a comparative case study, the data section may be structured according to the compared cases, and for a longitudinal study, the time scale may constitute the structure of the data section. This combined results section also includes an evaluation of the validity of the final results.

6.3 Checklist

The checklist items for reporting are shown in Table 10.

7 Reading and Reviewing Case Study Research

7.1 Reader’s Perspective

The reader of a case study report—independently of whether the intention is to use the findings or to review it for inclusion in a journal—must judge the quality of the study based on the written material. Case study reports tend to be large, firstly since case studies often are based on qualitative data, and hence the data cannot be presented in condensed form, as quantitative data may be in tables, diagrams and statistics. Secondly, the conclusions in qualitative analyses are not based on statistical significance, which can be interpreted in terms of a probability of an erroneous conclusion, but on reasoning and linking of observations to conclusions.

Reviewing empirical research in general must be done with certain care (Tichy 2000). Reading case study reports requires judging the quality of the report without the strict criteria which govern experimental studies to a larger extent, e.g. statistical confidence levels. This does not, however, mean that any report will do as a case study report. The reader must have a decent chance of finding the information of relevance, both to judge the quality of the case study and to get the findings from the study and set them into practice or build further research on.

The criteria and guidance presented above for performing and reporting case studies are relevant for the reader as well. However, in our work with derivation of checklists for case study research (Höst and Runeson 2007), evaluation feedback identified a need for a more condensed checklist for readers and reviewers. This is presented in Table 11, with numbers referring to the items of the other checklists for more in-depth criteria.

Case study research is conducted in order to investigate contemporary phenomena in their natural context. That is, no laboratory environment is set up by the researcher, where factors can be controlled. Instead the phenomena are studied in their normal context, allowing the researcher to understand how the phenomena interact with the context. Selection of subjects and objects is not based on statistically representative samples. Instead, research findings are obtained through the analysis in depth of typical or special cases.

Case study research is conducted by iteration over a set of phases. In the design phase, objectives are decided and the case is defined. Data collection is first planned with respect to data collection techniques and data sources, and then conducted in practice. Methods for data collection include, for example, interviews, observation, and usage of archival data. During the analysis phase, insights are both generated and analyzed, e.g. through coding of data and looking for patterns. During the analysis it is important to maintain a chain of evidence from the findings to the original data. The report should include sufficient data and examples to allow the reader to understand the chain of evidence.

This paper aims to provide a frame of reference for researchers when conducting case study research in software engineering, which is based on an analysis of existing case study literature and the authors’ own experiences of conducting case studies. As with other guidelines, there is a need to evaluate them through practical usage.

Easterbrook et al. distinguish between exploratory and confirmatory case studies. We interpret Robson’s explanatory category as being closely related to Easterbrook’s confirmatory category.

Robson denotes this category “emancipatory” in the social science context, while improvement is our adaptation to an engineering context.

Amschler Andrews A, Pradhan AS (2001) Ethical issues in empirical software engineering: the limits of policy. Empir Softw Eng 6(2):105–110 doi: 10.1023/A:1011442319273

Anastas JW, MacDonald ML (1994) Research design for the social work and the human services. New York, Lexington

Andersson C, Runeson P (2007a) A replicated quantitative analysis of fault distribution in complex software systems. IEEE Trans Softw Eng 33(5):273–286 doi: 10.1109/TSE.2007.1005

Andersson C, Runeson P (2007b) A spiral process model for case studies on software quality monitoring—method and metrics. Softw Process Improv Pract 12(2):125–140 doi: 10.1002/spip.311

Avison D, Baskerville R, Myers M (2001) Controlling action research projects. Inf Technol People 14(1):28–45 doi: 10.1108/09593840110384762

Basili VR, Weiss DM (1984) A methodology for collecting valid software engineering data. IEEE Trans Softw Eng SE10(6):728–739

Basili VR, Selby RW, Hutchens DH (1986) Experimentation in Software Engineering. IEEE Trans Softw Eng SE12(7):733–744

Baskerville RL, Wood-Harper AT (1996) A critical perspective on action research as method for information systems research. J Inf Technol 11:235–246 doi: 10.1080/026839696345289

Benbasat I, Goldstein DK, Mead M (1987) The case research strategy in studies of information systems. MIS Q 11(3):369–386 doi: 10.2307/248684

Corbin J, Strauss A (2008) Basics of qualitative research, 3rd edn. Sage

Dalkey N, Helmer O (1963) An experimental application of the delphi method to the use of experts. Manage Sci 9(3):458–467

Dittrich Y (ed) (2007) Special issue on qualitative software engineering research. Inf Softw Technol 49(6):531–694. doi: 10.1016/j.infsof.2007.02.009

Dittrich Y, Rönkkö K, Eriksson J, Hansson C, Lindeberg O (2008) Cooperative method development. combining qualitative empirical research with method, technique and process improvement. Empir Softw Eng 13(3):231–260 doi: 10.1007/s10664-007-9057-1

Eisenhardt KM (1989) Building theories from case study research. Acad Manage Rev 14(4):532–550 doi: 10.2307/258557

Easterbrook S, Singer J, Storey M-A, Damian D (2008) Selecting empirical methods for software engineering research, Chapter 11 in Shull et al. (2008)

Fenton N, Pfleeger SL (1996) Software metrics: a rigorous and practical approach. Thomson Computer Press

Flynn BB, Sakakibara S, Schroeder RG, Bates K, Flynn EJ (1990) Empirical research methods in operations management. Oper Manage 9(2):250–284 doi: 10.1016/0272-6963(90)90098-X

Flyvbjerg B (2007) Five misunderstandings about case-study research. In: Qualitative research practice: concise paperback edition. Sage, pp 390–404

Gorschek T, Garre P, Larsson S, Wohlin C (2006) A model for technology transfer in practice. IEEE Softw 23(6):88–95 doi: 10.1109/MS.2006.147

Hannay JE, Sjøberg DIK, Dybå TA (2007) Systematic review of theory use in software engineering experiments. IEEE Trans Softw Eng 33(2):87–107 doi: 10.1109/TSE.2007.12

Hove SE, Anda BCD (2005) Experiences from conducting semi-structured interviews in empirical software engineering research. Proceedings 11th IEEE International Software Metrics Symposium (Metrics 2005) 23:1–10

Höst M, Runeson P (2007) Checklists for Software Engineering Case Study Research, In Proceedings First International Symposium on Empirical Software Engineering and Measurement , pp 479–481

Iversen JH, Mathiassen L, Nielsen PA (2004) Managing risk in software process improvement: an action research approach. MIS Q 28(3):395–433

Jedlitschka A, Pfahl D (2005) Reporting guidelines for controlled experiments in software engineering, In Proceedings of ACM/IEEE International Symposium on Empirical Software Engineering , pp 95–104, see also Chapter 8 in Shull et al. (2008)

Johnson P, Kou H, Paulding M, Zhang Q, Kagawa A, Yamashita T (2005) Improving software development management through software project telemetry. IEEE Softw 22(4):76–85 doi: 10.1109/MS.2005.95

Karahasanović A, Anda B, Arisholm E, Hove SE, Jørgensen M, Sjøberg DIK, Welland R (2005) Collecting feedback during software engineering experiments. Empir Softw Eng 10:113–147 doi: 10.1007/s10664-004-6189-4

Karlström D (2004) Integrating Management and Engineering Processes in Software Product Development , PhD Thesis ISRN LUTEDX/TETS—1069-SE+230p, Lund University.

Karlström D, Runeson P (2005) Combining agile methods with stage-gate project management. IEEE Softw 22(3):43–49 doi: 10.1109/MS.2005.59

Karlström D, Runeson P (2006) Integrating agile software development into stage-gate product development. Empir Softw Eng 11:203–225 doi: 10.1007/s10664-006-6402-8

Klein HK, Myers MD (1999) A set of principles for conducting and evaluating interpretative field studies in information systems. MIS Q 23(1):67–88 doi: 10.2307/249410

Kitchenham B, Pickard L, Pfleeger SL (1995) Case studies for method and tool evaluation. IEEE Softw 4(12):52–62 doi: 10.1109/52.391832

Kitchenham B, Pfleeger SL, Pickard LM, Jones PW, Hoaglin DC, El Emam K, Rosenberg J (2002) Preliminary guidelines for empirical research in software engineering. IEEE Trans Softw Eng 28(8):721–734 doi: 10.1109/TSE.2002.1027796

Kitchenham B (2007) Guidelines for performing Systematic Literature Reviews in Software Engineering , Version 2.3, EBSE Technical Report EBSE-2007-01, Keele University and University of Durham

Kitchenham B, Al-Khilidar H, Ali Babar M, Berry M, Cox K, Keung J, Kurniawati F, Staples M, Zhang H, Zhu L (2008) Evaluating guidelines for reporting empirical software engineering studies. Empir Softw Eng 13(1):97–121 doi: 10.1007/s10664-007-9053-5

Lee AS (1989) A scientific methodology for MIS case studies. MIS Q 13(1):33–54 doi: 10.2307/248698

Lethbridge TC, Sim SE, Singer J (2005) Studying software engineers: data collection techniques for software field studies. Empir Softw Eng 10(3):311–341 see also Chapter 1 in Shull et al. (2008)

Moher T, Schneider GM (1981) Methods for improving controlled experimentation in software engineering, Proceedings of the 5th International Conference on Software Engineering pp 224–233

Owen S, Budgen D, Brereton P (2006) Protocol analysis: a neglected practice. Commun ACM 49(2):117–122 doi: 10.1145/1113034.1113039

Perry DE, Sim SE, Easterbrook S (2005) Case studies for software engineers, 29th Annual IEEE/NASA Software Engineering Workshop — Tutorial Notes pp 96–159

Pervan G, Maimbo H (2005) Designing a case study protocol for application in IS research, The Ninth Pacific Conference on Information Systems pp 1281–1292

Ramesh V, Glass RL, Vessey I (2004) Research in computer science: an empirical study. J Syst Softw 70(1–2):165–176 doi: 10.1016/S0164-1212(03)00015-3

Regnell B, Höst M, Natt och Dag J, Beremark P, Hjelm T (2001) An industrial case study on distributed prioritisation in market-driven requirements engineering for packaged software. Requirements Eng 6:51–62 doi: 10.1007/s007660170015

Robinson H, Segal J, Sharp H (2007) Ethnographically-informed empirical studies of software practice. Inf Softw Technol 49:540–551 doi: 10.1016/j.infsof.2007.02.007

Robson C (2002) Real World Research, 2nd edn. Blackwell

Seaman C (1999) Qualitative methods in empirical studies of software engineering. IEEE Trans Softw Eng 25(4):557–572; see also Chapter 2 in Shull et al. (2008)

Sharp H, Robinson H (2004) An ethnographic study of XP practice. Empir Softw Eng 9(4):353–375 doi: 10.1023/B:EMSE.0000039884.79385.54

Shull F, Feldman RL (2008) Building theories from multiple evidence sources. In: Shull F et al (ed) Guide to advanced empirical software engineering. Springer-Verlag, London

Shull F, Basili V, Carver J, Maldonado JC, Travassos GH, Mendonca M, Fabbri S (2002) Replicating software engineering experiments: addressing the tacit knowledge problem, Proceedings on International Symposium Empirical Software Engineering pp 7–16

Shull F, Singer J, Sjøberg D (eds) (2008) Guide to Advanced Empirical Software Engineering. Springer-Verlag: London

Sim SE, Singer J, Storey M-A (2001) Beg, borrow, or steal: using multidisciplinary approaches in empirical software engineering research, an ICSE 2000 workshop report. Empir Softw Eng 6(1):85–93 doi: 10.1023/A:1009809824225

Singer J, Vinson NG (2002) Ethical issues in empirical studies of software engineering. IEEE Trans Softw Eng 28(12):1171–1180 doi: 10.1109/TSE.2002.1158289

Sjøberg DIK, Dybå T, Anda BCD, Hannay J (2008) Building theories in software engineering. In: Shull F et al (ed) Guide to advanced empirical software engineering. Springer-Verlag, London

Sjøberg DIK, Hannay JE, Hansen O, Kampenes VB (2005) A survey of controlled experiments in software engineering. IEEE Trans Softw Eng 31(9):733–753 doi: 10.1109/TSE.2005.97

Stake RE (1995) The art of case study research. Sage

Tichy WF (1998) Should computer scientists experiment more? Computer 31(5):32–40 doi: 10.1109/2.675631

Tichy WF (2000) Hints for reviewing empirical work in software engineering. Empir Softw Eng 5(4):309–312 doi: 10.1023/A:1009844119158

van Solingen R, Berghout E (1999) The goal/question/metric method: a practical guide for quality improvement of software development. McGraw-Hill

von Mayrhauser A, Vans AM (1996) Identification of dynamic comprehension processes during large scale maintenance. IEEE Trans Softw Eng 22(6):424–438 doi: 10.1109/32.508315

Wallace C, Cook C, Summet J, Burnett M (2002) Human centric computing languages and environments. Proceeding Symposia on Human Centric Computing Languages and Environments pp 63–65

Wohlin C, Höst M (2001) Special section: controlled experiments in software engineering, guest editorial. Inf Softw Technol 43(15):921–924 doi: 10.1016/S0950-5849(01)00200-2

Wohlin C, Höst M, Ohlsson MC, Regnell B, Runeson P, Wesslén A (2000) Experimentation in software engineering: an introduction. Kluwer

Wohlin C, Höst M, Henningsson K (2003) Empirical research methods in software engineering. In: Conradi R, Wang AI (eds) Empirical Methods and Studies in Software Engineering: Experiences from ESERNET. Springer

Yin RK (2003) Case study research: design and methods, 3rd edn. Sage, London

Zelkowitz MV, Wallace DR (1998) Experimental models for validating technology. Computer 31(5):23–31

Acknowledgement

The authors are grateful for the feedback on the checklists from the ISERN members and IASESE attendants in September 2007. Special thanks to Professor Claes Wohlin, Mr. Kim Weyns and Mr. Andreas Jedlitschka for their review of an earlier draft of the paper. Thanks also to the anonymous reviewers for their proposals for substantial improvements. The work is partly funded by the Swedish Research Council under grant 622-2004-552 for a senior researcher position in software engineering.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

Author information

Authors and affiliations.

Department Computer Science, Lund University, Box 118, SE-221 00, Lund, Sweden

Per Runeson & Martin Höst

Corresponding author

Correspondence to Per Runeson .

Additional information

Editor: D. Sjoberg

About this article

Runeson, P., Höst, M. Guidelines for conducting and reporting case study research in software engineering. Empir Software Eng 14 , 131–164 (2009). https://doi.org/10.1007/s10664-008-9102-8

Published : 19 December 2008

Issue Date : April 2009

DOI : https://doi.org/10.1007/s10664-008-9102-8

Computer-aided software engineering (CASE) is the implementation of computer-facilitated tools and methods in software development. CASE is used to ensure high-quality and defect-free software. CASE ensures a check-pointed and disciplined approach and helps designers, developers, testers, managers, and others to see the project milestones during development. 

CASE can also serve as a repository for documents related to projects, like business plans, requirements, and design specifications. One of the major advantages of using CASE is that the final product is more likely to meet real-world requirements, because CASE ensures that customers remain part of the process. 

CASE covers a wide set of labor-saving tools used in software development. It provides a framework for organizing projects and can help enhance productivity. There was more interest in the concept of CASE tools years ago, but less so today, as the tools have morphed into different functions, often in reaction to software developers' needs. The concept of CASE also received a heavy dose of criticism after its release. 

What are CASE Tools?

The essential idea of CASE tools is that built-in programs can help analyze developing systems in order to enhance quality and produce better outcomes. Throughout the 1990s, CASE tools became part of the software lexicon, and big companies like IBM were using these kinds of tools to help create software. 

Various tools are incorporated in CASE and are called CASE tools, which are used to support different stages and milestones in a software development life cycle. 

Types of CASE Tools:

  • Diagramming Tools:  It helps in diagrammatic and graphical representations of the data and system processes. It represents system elements, control flow and data flow among different software components and system structures in a pictorial form. For example, Flow Chart Maker tool for making state-of-the-art flowcharts.  
  • Computer Display and Report Generators:  These help in understanding the data requirements and the relationships involved. For example, Accept 360, Accompa, and CaseComplete for requirement analysis, and Visible Analyst for total analysis.
  • Central Repository:  It provides a single point of storage for data diagrams, reports, and documents related to project management.
  • Documentation Generators:  It helps in generating user and technical documentation as per standards. It creates documents for technical users and end users.  For example, Doxygen, DrExplain, Adobe RoboHelp for documentation.  
  • Code Generators:  It aids in the auto-generation of code, including definitions, with the help of designs, documents, and diagrams.
  • Tools for Requirement Management: It makes gathering, evaluating, and managing software needs easier.
  • Tools for Analysis and Design : It offers instruments for modelling system architecture and behaviour, which helps throughout the analysis and design stages of software development.
  • Tools for Database Management: It facilitates database construction, design, and administration.
  • Tools for Documentation: It makes the process of creating, organizing, and maintaining project documentation easier.
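
The code-generation idea above can be illustrated in miniature. The sketch below is hypothetical Python, not modeled on any particular CASE product: a generator turns a declarative design artifact (here, a dict of entities and fields) into boilerplate class definitions.

```python
# A toy design artifact: entity names mapped to their fields.
model = {
    "Customer": ["name", "email"],
    "Order": ["customer_id", "total"],
}

def generate_class(name, fields):
    # Emit a class definition whose constructor stores each field.
    lines = ["class %s:" % name]
    lines.append("    def __init__(self, %s):" % ", ".join(fields))
    for field in fields:
        lines.append("        self.%s = %s" % (field, field))
    return "\n".join(lines)

source = "\n\n".join(generate_class(n, f) for n, f in model.items())
print(source)
```

Real generators work from richer artifacts (UML models, ER diagrams), but the division of labor is the same: the designer maintains the model, and the tool regenerates the repetitive code.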

Advantages of the CASE approach: 

  • Improved Documentation: Comprehensive documentation creation and maintenance is made easier by CASE tools. Since automatically generated documentation is usually more accurate and up to date, there are fewer opportunities for errors and misunderstandings brought on by out-of-date material.
  • Reusing Components: Reusable component creation and maintenance are frequently facilitated by CASE tools. This encourages a development approach that is modular and component-based, enabling teams to shorten development times and reuse tested solutions.
  • Quicker Cycles of Development: Development cycles take less time when certain jobs, such as testing and code generation, are automated. This may result in software solutions being delivered more quickly, meeting deadlines and keeping up with changing business requirements.
  • Improved Results : Code generation, documentation, and testing are just a few of the time-consuming, repetitive operations that CASE tools perform. Due to this automation, engineers are able to concentrate on more intricate and imaginative facets of software development, which boosts output.
  • Achieving uniformity and standardization:  Coding conventions, documentation formats and design patterns are just a few of the areas of software development where CASE tools enforce uniformity and standards. This guarantees consistent and maintainable software development.

Disadvantages of the CASE approach: 

  • Cost: Using a case tool is very costly. Most firms engaged in software development on a small scale do not invest in CASE tools because they think that the benefit of CASE is justifiable only in the development of large systems.
  • Learning Curve: In most cases, programmers’ productivity may fall in the initial phase of implementation, because users need time to learn the technology. Many consultants offer training and on-site services that can be important for accelerating the learning curve and for the development and use of the CASE tools.
  • Tool Mix: It is important to build an appropriate tool mix to gain a cost advantage. CASE integration and data integration across all platforms are extremely important.

Conclusion:

In today’s software development world, computer-aided software engineering is a vital tool that enables teams to produce high-quality software quickly and cooperatively. CASE tools will probably become more and more essential as technology develops in order to satisfy the demands of complicated software development projects.

What Is a Case Study? | Definition, Examples & Methods

Published on May 8, 2019 by Shona McCombes . Revised on November 20, 2023.

A case study is a detailed study of a specific subject, such as a person, group, place, event, organization, or phenomenon. Case studies are commonly used in social, educational, clinical, and business research.

A case study research design usually involves qualitative methods, but quantitative methods are sometimes also used. Case studies are good for describing, comparing, evaluating, and understanding different aspects of a research problem.

Table of contents

  • When to do a case study
  • Step 1: Select a case
  • Step 2: Build a theoretical framework
  • Step 3: Collect your data
  • Step 4: Describe and analyze the case
  • Other interesting articles

A case study is an appropriate research design when you want to gain concrete, contextual, in-depth knowledge about a specific real-world subject. It allows you to explore the key characteristics, meanings, and implications of the case.

Case studies are often a good choice in a thesis or dissertation. They keep your project focused and manageable when you don’t have the time or resources to do large-scale research.

You might use just one complex case study where you explore a single subject in depth, or conduct multiple case studies to compare and illuminate different aspects of your research problem.

Once you have developed your problem statement and research questions , you should be ready to choose the specific case that you want to focus on. A good case study should have the potential to:

  • Provide new or unexpected insights into the subject
  • Challenge or complicate existing assumptions and theories
  • Propose practical courses of action to resolve a problem
  • Open up new directions for future research

Tip: If your research is more practical in nature and aims to simultaneously investigate an issue as you solve it, consider conducting action research instead.

Unlike quantitative or experimental research , a strong case study does not require a random or representative sample. In fact, case studies often deliberately focus on unusual, neglected, or outlying cases which may shed new light on the research problem.

Example of an outlying case study: In the 1960s the town of Roseto, Pennsylvania was discovered to have extremely low rates of heart disease compared to the US average. It became an important case study for understanding previously neglected causes of heart disease.

However, you can also choose a more common or representative case to exemplify a particular category, experience or phenomenon.

Example of a representative case study: In the 1920s, two sociologists used Muncie, Indiana as a case study of a typical American city that supposedly exemplified the changing culture of the US at the time.

While case studies focus more on concrete details than general theories, they should usually have some connection with theory in the field. This way the case study is not just an isolated description, but is integrated into existing knowledge about the topic. It might aim to:

  • Exemplify a theory by showing how it explains the case under investigation
  • Expand on a theory by uncovering new concepts and ideas that need to be incorporated
  • Challenge a theory by exploring an outlier case that doesn’t fit with established assumptions

To ensure that your analysis of the case has a solid academic grounding, you should conduct a literature review of sources related to the topic and develop a theoretical framework . This means identifying key concepts and theories to guide your analysis and interpretation.

There are many different research methods you can use to collect data on your subject. Case studies tend to focus on qualitative data using methods such as interviews, observations, and analysis of primary and secondary sources (e.g., newspaper articles, photographs, official records). Sometimes a case study will also collect quantitative data.

Example of a mixed methods case study: For a case study of a wind farm development in a rural area, you could collect quantitative data on employment rates and business revenue, collect qualitative data on local people’s perceptions and experiences, and analyze local and national media coverage of the development.

The aim is to gain as thorough an understanding as possible of the case and its context.

In writing up the case study, you need to bring together all the relevant aspects to give as complete a picture as possible of the subject.

How you report your findings depends on the type of research you are doing. Some case studies are structured like a standard scientific paper or thesis, with separate sections or chapters for the methods, results, and discussion.

Others are written in a more narrative style, aiming to explore the case from various angles and analyze its meanings and implications (for example, by using textual analysis or discourse analysis).

In all cases, though, make sure to give contextual details about the case, connect it back to the literature and theory, and discuss how it fits into wider patterns or debates.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Ecological validity

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Introduction

What is a Case Study? A case study describes a programming problem, the process used by an expert to solve the problem, and one or more solutions to the problem. Case studies emphasize the decisions encountered by the programmer and the criteria used to choose among alternatives.

A sample case study is outlined in Appendix A. It deals with the problem of completing a program to print a calendar for a given year. The problem specifies that two subprograms are to be supplied: a function NumberOfDaysIn , which returns the number of days in a given month in a given year, and a procedure PrintMonth , which prints the calendar "page" for a given month. (The solution to this problem is treated in more detail in Clancy and Linn [4].)
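
For illustration, the two subprograms might be sketched as follows. This is a hypothetical Python rendering of the interfaces named above, not the case study's actual code (which appears in Clancy and Linn [4]); in particular, the parameters of `print_month` and the helper `is_leap_year` are assumptions.

```python
def is_leap_year(year):
    # Gregorian rule: every fourth year, except centuries not divisible by 400
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def number_of_days_in(month, year):
    # month is 1 (January) through 12 (December)
    days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    if month == 2 and is_leap_year(year):
        return 29
    return days[month - 1]

def print_month(name, first_weekday, num_days):
    # Print one calendar "page"; first_weekday: 0 = Sunday ... 6 = Saturday
    print(name.center(20))
    print("Su Mo Tu We Th Fr Sa")
    cells = ["  "] * first_weekday + ["%2d" % d for d in range(1, num_days + 1)]
    for row in range(0, len(cells), 7):
        print(" ".join(cells[row:row + 7]))
```

For example, `print_month("February", 4, number_of_days_in(2, 2024))` prints the page for February 2024, which begins on a Thursday.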

Why aren't case studies more commonly used? The case study approach is atypical in introductory and intermediate programming courses. One reason is the relative scarcity of appropriate material. Published sources of case studies include the "Literate Programming" columns in Communications of the ACM [13], the books by Bentley [1][2], Kernighan and Plauger [6], and Clancy and Linn [4], and excerpts from books such as Ledgard and Tauer [9], Kruse [8], and Reges [12]. Much of this material is intended for experts or teachers, not students.

Many CS 1 and CS 2 courses are packed with details, and instructors may believe they have no room to include case studies. In such courses, however, it is easy for students to lose sight of the big picture, gain only superficial understanding of program design, and fail to appreciate the problem-solving power of a programming language.

Perhaps another reason that case studies aren't used is an impression by instructors that novice programmers are not ready, or inclined, to appreciate the issues discussed in a case study. Personal experience and informal surveys indicate, however, that students learn from case studies both in college introductory courses and in precollege programming classes.

Lastly, instructors may not be aware of the variety of ways to incorporate case studies into their classes; hence this paper. We describe how case studies can enhance laboratory and homework exercises, small group work, examinations, and lectures. We or our colleagues at Berkeley and elsewhere have tested, in introductory and intermediate programming courses, all the techniques we describe. These techniques are useful in more advanced settings as well.

Laboratory and homework assignments A case study presents opportunities for students to analyze, modify, or extend a large program, or reuse the code to solve a related problem. The accompanying narrative makes the code easier for the student to understand, and thus reduces the complexity and maximizes the benefit of the assigned exercises.

One set of exercises requires students to use a compiled executable version of the code; a version with the sizes of the data structures reduced for experiments is provided. With executable code, a source listing need not be provided. Students can still accumulate a substantial amount of information about a program by predicting its behavior given sample input, providing input that produces a given behavior, distinguishing legal from illegal input, and devising good examples to teach new users about the program. They can also probe the limits of the program: How much input can it handle? What are constraints on the input format? How are out-of-bounds or overflow cases handled? Finally, students can evaluate the user interface, and compare it and the program's capabilities to other similar programs they have used.

With online source code, students working together can play "debugging games". One partner (or staff member if students are to work individually) inserts a bug into the program; the other attempts to find it. This requires that students have read the program but do not have the code listing nearby. It encourages students to invent thorough sets of test data, and to think about what aspects of a program's style and organization facilitate testing and debugging.

Typical laboratory or homework assignments using online code include modifying a program to change its user interface, replacing its data structures, and adding or extending features. Such activities can be profitably done in teams as well as individually. They illustrate the importance of code readability, planning, and incremental development, and introduce subtle issues, for instance, that the program should be modified in the style in which it is written.

Other online exercises include using the code in a larger application, or solving a similar problem. The advantages of rewriting reusable code are apparent in both activities.

Students can also be encouraged to create their own case studies. This activity reveals student thinking and helps instructors make sense of students' understanding of the material. Instructors can also incorporate the student solutions into subsequent course activities.

One other activity, that of solving the problem before seeing the solution and accompanying discussion, would seem to be good preparation for a case study. Students might be expected to be more sensitive to the decisions described in the narrative. Linn and Clancy [10] noted, however, that this approach was not always productive. Some students, having written one program to solve the problem, weren't interested in alternatives.

Example exercises involving the "Calendar" case study The solution programs in the "Calendar" case study read a year from the user, then print the calendar for that year. Students might be asked to perform experiments with executable versions of these programs, to determine how the programs handle years before the Gregorian reform of 1582, and whether erratic behavior results from negative or exceptionally large year values.

There are many places in the program to insert bugs, such as off-by-one initializations and off-by-one or reversed comparisons.
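
For instance, dropping the year-400 exception from the leap-year test is a classic plantable bug: it survives casual testing and surfaces only on century years. The sketch below is hypothetical Python (the function names are not from the case study).

```python
def is_leap_year(year):
    # Correct Gregorian rule
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def is_leap_year_planted(year):
    # Planted bug: the "% 400" exception is dropped, so the year 2000
    # is wrongly treated as a common year
    return year % 4 == 0 and year % 100 != 0

# A casual test set misses the bug; a thorough one catches it.
casual = [2023, 2024]
thorough = [2023, 2024, 1900, 2000]
assert all(is_leap_year(y) == is_leap_year_planted(y) for y in casual)
assert any(is_leap_year(y) != is_leap_year_planted(y) for y in thorough)
```

Hunting for such a bug rewards exactly the habits the exercise is meant to build: reading the code closely and choosing boundary-case test data.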

Possible modifications of the programs include highlighting of holidays or other significant days, and printing weekend days in a special format. Code from the calendar programs can be reused in a variety of applications involving date computations, and in programs to produce different kinds of calendars. Examples of the latter are the Jewish, Muslim, Mayan, Chinese, and French Revolutionary calendars, and a fantasy calendar without Mondays.

Small-group discussion Discussion is most effective when students can contribute diverse perspectives and expertise. Naturally-occurring differences in problem-solving style among group members provide good grist for discussion and brainstorming. For instance, what aspects of the style and organization of the program make it easy or hard to understand, and why? What parts of the program match code that group members have seen before? Which of the design or development decisions would group members have made differently, and why?

One way to ensure that students have different types of expertise is to ask subgroups to study different case studies and then present the ideas to the rest of the class.

Discussion can also take advantage of previous online exercises. What were various ways of approaching these exercises? What aspects of the style and organization of the program made it easy or hard to modify? How does one set of test results provide better evidence for the correctness of the program than another? How did a partnership divide the problem, and how were the skills of the partners put to effective use?

Finally, discussion can provide opportunities for students to reflect on their own behaviors. The discussion leader might ask the group to compare their abilities to detect errors in output, to locate errors in code once they've been detected, and to find the simplest ways to fix the errors. Discussion might also encourage students to recognize and admit their programming weaknesses, such as propensities to "rush to the computer" or to test too much code at once.

How can these skills be assessed? In the typical programming course, the end product of a programming assignment is graded rather than the supposedly "disciplined approach to design, coding, and testing" that created it. An exam is constrained by limits on the time available to take it and the time available to grade it. Exam questions tend to focus on isolated facts rather than on complex problems.

Case studies allow assessment of analysis, design, and development in the context of a challenging problem. Here are some example question patterns. They must, of course, be asked in the context of a particular case study.

Such questions, based on the context provided by the case study, are much less open-ended and much easier to grade than entire programs. They also require much less reading during an examination, since case studies can be reviewed in advance, than do questions in which a context must be set up from scratch. They therefore allow good programmers with reading deficiencies a better opportunity to display their knowledge.

The narrative description of design and development decisions is an important component for assessment. Linn and Clancy [10] found that students who received expert commentary did significantly better on their tests than students who received documented code without commentary.

Examples of assessment using the "Calendar" case study The programs described in the "Calendar" case study, though each no more than two pages of code, provide surprising opportunities for questions about analysis, design, and development. Here are some examples.

Lectures One might guess that lecturing about a case study would be difficult; the narrative description contains much of what a lecturer might wish to say about the problem solution. How, then, can a lecturer provide an interesting, enthusiastic presentation for students?

One good approach is to give suggestions about how to read the narrative and program: how to find the important parts, how to take notes on the program, what experiments to try, and what collection of input data should be built to test the program. The lecturer can also point out what programming patterns and good habits are illustrated in the case study, and relate them to students' previous experiences in the class. In addition, the lecturer might enrich the problem solution by supplying missing information and structure for the students, and by adapting the material to their experience and background. Hints for the homework are an obvious source of material. Much of the published case study material discusses the design but not the development stage; the lecturer might present a model sequence for testing and debugging the program.

Another source of lecture material is extension of the ideas in the case study. A lecturer might discuss how segments of the program could be used for other problem solutions, or discuss problems that can be solved in ways illustrated in the case study. A lecturer might also talk about more general applications of case study activities. An example might be a technique like mutation testing, in which the goal is to build a test suite that catches standard types of errors intentionally introduced into the program (see Budd [3]).
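
A miniature version of mutation testing can be sketched as follows (a hypothetical Python sketch, not Budd's system): hand-made mutants of a date routine are "killed" when some test case in the suite detects them.

```python
def days_in_month(month, leap):
    days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    if month == 2 and leap:
        return 29
    return days[month - 1]

# Hand-written mutants, each introducing one standard type of error.
def mutant_off_by_one(month, leap):
    days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    if month == 2 and leap:
        return 29
    return days[month]          # off-by-one index

def mutant_wrong_constant(month, leap):
    days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    if month == 2 and leap:
        return 28               # wrong constant
    return days[month - 1]

tests = [((2, True), 29), ((2, False), 28), ((12, False), 31)]

def kills(candidate):
    # A test suite "kills" a mutant if any test case detects the error.
    for args, expected in tests:
        try:
            if candidate(*args) != expected:
                return True
        except IndexError:
            return True
    return False
```

A suite that kills every mutant while passing the original program is evidence that the test data probes the standard error types, which is the goal of the exercise.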

Finally, the lecturer can personalize the presentation with his or her opinions about controversial aspects of the design or development. Discussion of the lecturer's personal experience with similar problems or approaches can fascinate a class.

Summary

As textbooks for CS 1 and CS 2 get thicker and thicker, are your students memorizing more and understanding less? Case studies allow students, guided by an expert, to explore and experiment with design, analysis, and modification of programs they might not be able to create on their own. Students can solve what seem like "real" problems by altering expert solutions. Instructors can assign homework, lab activities, and discussion topics related to real problems, and easily prepare examinations that both educate students and provide significant information about student progress.

Appendix A Outline of decisions encountered in the "Calendar" case study



Case Study – Methods, Examples and Guide


Case Study Research

A case study is a research method that involves an in-depth examination and analysis of a particular phenomenon or case, such as an individual, organization, community, event, or situation.

It is a qualitative research approach that aims to provide a detailed and comprehensive understanding of the case being studied. Case studies typically involve multiple sources of data, including interviews, observations, documents, and artifacts, which are analyzed using various techniques, such as content analysis, thematic analysis, and grounded theory. The findings of a case study are often used to develop theories, inform policy or practice, or generate new research questions.

Types of Case Study

The main types of case study are as follows:

Single-Case Study

A single-case study is an in-depth analysis of a single case. This type of case study is useful when the researcher wants to understand a specific phenomenon in detail.

For Example , A researcher might conduct a single-case study on a particular individual to understand their experiences with a particular health condition or a specific organization to explore their management practices. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as content analysis or thematic analysis. The findings of a single-case study are often used to generate new research questions, develop theories, or inform policy or practice.

Multiple-Case Study

A multiple-case study involves the analysis of several cases that are similar in nature. This type of case study is useful when the researcher wants to identify similarities and differences between the cases.

For Example, a researcher might conduct a multiple-case study on several companies to explore the factors that contribute to their success or failure. The researcher collects data from each case, compares and contrasts the findings, and uses various techniques to analyze the data, such as comparative analysis or pattern-matching. The findings of a multiple-case study can be used to develop theories, inform policy or practice, or generate new research questions.

Exploratory Case Study

An exploratory case study is used to explore a new or understudied phenomenon. This type of case study is useful when the researcher wants to generate hypotheses or theories about the phenomenon.

For Example, a researcher might conduct an exploratory case study on a new technology to understand its potential impact on society. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as grounded theory or content analysis. The findings of an exploratory case study can be used to generate new research questions, develop theories, or inform policy or practice.

Descriptive Case Study

A descriptive case study is used to describe a particular phenomenon in detail. This type of case study is useful when the researcher wants to provide a comprehensive account of the phenomenon.

For Example, a researcher might conduct a descriptive case study on a particular community to understand its social and economic characteristics. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as content analysis or thematic analysis. The findings of a descriptive case study can be used to inform policy or practice or generate new research questions.

Instrumental Case Study

An instrumental case study is used to understand a particular phenomenon that is instrumental in achieving a particular goal. This type of case study is useful when the researcher wants to understand the role of the phenomenon in achieving the goal.

For Example, a researcher might conduct an instrumental case study on a particular policy to understand its impact on achieving a particular goal, such as reducing poverty. The researcher collects data from multiple sources, such as interviews, observations, and documents, and uses various techniques to analyze the data, such as content analysis or thematic analysis. The findings of an instrumental case study can be used to inform policy or practice or generate new research questions.

Case Study Data Collection Methods

Here are some common data collection methods for case studies:

Interviews

Interviews involve asking questions of individuals who have knowledge or experience relevant to the case study. Interviews can be structured (where the same questions are asked of all participants) or unstructured (where the interviewer follows up on responses with further questions). Interviews can be conducted in person, over the phone, or through video conferencing.

Observations

Observations involve watching and recording the behavior and activities of individuals or groups relevant to the case study. Observations can be participant (where the researcher actively participates in the activities) or non-participant (where the researcher observes from a distance). Observations can be recorded using notes, audio or video recordings, or photographs.

Documents

Documents can be used as a source of information for case studies. Documents can include reports, memos, emails, letters, and other written materials related to the case study. Documents can be collected from the case study participants or from public sources.

Surveys

Surveys involve asking a set of questions to a sample of individuals relevant to the case study. Surveys can be administered in person, over the phone, through mail or email, or online. Surveys can be used to gather information on attitudes, opinions, or behaviors related to the case study.

Artifacts

Artifacts are physical objects relevant to the case study. Artifacts can include tools, equipment, products, or other objects that provide insights into the case study phenomenon.

How to conduct Case Study Research

Conducting a case study research involves several steps that need to be followed to ensure the quality and rigor of the study. Here are the steps to conduct case study research:

  • Define the research questions: The first step in conducting a case study research is to define the research questions. The research questions should be specific, measurable, and relevant to the case study phenomenon under investigation.
  • Select the case: The next step is to select the case or cases to be studied. The case should be relevant to the research questions and should provide rich and diverse data that can be used to answer the research questions.
  • Collect data: Data can be collected using various methods, such as interviews, observations, documents, surveys, and artifacts. The data collection method should be selected based on the research questions and the nature of the case study phenomenon.
  • Analyze the data: The data collected from the case study should be analyzed using various techniques, such as content analysis, thematic analysis, or grounded theory. The analysis should be guided by the research questions and should aim to provide insights and conclusions relevant to the research questions.
  • Draw conclusions: The conclusions drawn from the case study should be based on the data analysis and should be relevant to the research questions. The conclusions should be supported by evidence and should be clearly stated.
  • Validate the findings: The findings of the case study should be validated by reviewing the data and the analysis with participants or other experts in the field. This helps to ensure the validity and reliability of the findings.
  • Write the report: The final step is to write the report of the case study research. The report should provide a clear description of the case study phenomenon, the research questions, the data collection methods, the data analysis, the findings, and the conclusions. The report should be written in a clear and concise manner and should follow the guidelines for academic writing.

Examples of Case Study

Here are some examples of case study research:

  • The Hawthorne Studies: Conducted between 1924 and 1932, the Hawthorne Studies were a series of case studies conducted by Elton Mayo and his colleagues to examine the impact of work environment on employee productivity. The studies were conducted at the Hawthorne Works plant of the Western Electric Company in Chicago and included interviews, observations, and experiments.
  • The Stanford Prison Experiment: Conducted in 1971, the Stanford Prison Experiment was a case study conducted by Philip Zimbardo to examine the psychological effects of power and authority. The study involved simulating a prison environment and assigning participants to the role of guards or prisoners. The study was controversial due to the ethical issues it raised.
  • The Challenger Disaster: The Challenger Disaster was a case study conducted to examine the causes of the Space Shuttle Challenger explosion in 1986. The study included interviews, observations, and analysis of data to identify the technical, organizational, and cultural factors that contributed to the disaster.
  • The Enron Scandal: The Enron Scandal was a case study conducted to examine the causes of the Enron Corporation’s bankruptcy in 2001. The study included interviews, analysis of financial data, and review of documents to identify the accounting practices, corporate culture, and ethical issues that led to the company’s downfall.
  • The Fukushima Nuclear Disaster: The Fukushima Nuclear Disaster was a case study conducted to examine the causes of the nuclear accident that occurred at the Fukushima Daiichi Nuclear Power Plant in Japan in 2011. The study included interviews, analysis of data, and review of documents to identify the technical, organizational, and cultural factors that contributed to the disaster.

Application of Case Study

Case studies have a wide range of applications across various fields and industries. Here are some examples:

Business and Management

Case studies are widely used in business and management to examine real-life situations and develop problem-solving skills. Case studies can help students and professionals to develop a deep understanding of business concepts, theories, and best practices.

Healthcare

Case studies are used in healthcare to examine patient care, treatment options, and outcomes. Case studies can help healthcare professionals to develop critical thinking skills, diagnose complex medical conditions, and develop effective treatment plans.

Education

Case studies are used in education to examine teaching and learning practices. Case studies can help educators to develop effective teaching strategies, evaluate student progress, and identify areas for improvement.

Social Sciences

Case studies are widely used in social sciences to examine human behavior, social phenomena, and cultural practices. Case studies can help researchers to develop theories, test hypotheses, and gain insights into complex social issues.

Law and Ethics

Case studies are used in law and ethics to examine legal and ethical dilemmas. Case studies can help lawyers, policymakers, and ethical professionals to develop critical thinking skills, analyze complex cases, and make informed decisions.

Purpose of Case Study

The purpose of a case study is to provide a detailed analysis of a specific phenomenon, issue, or problem in its real-life context. A case study is a qualitative research method that involves the in-depth exploration and analysis of a particular case, which can be an individual, group, organization, event, or community.

The primary purpose of a case study is to generate a comprehensive and nuanced understanding of the case, including its history, context, and dynamics. Case studies can help researchers to identify and examine the underlying factors, processes, and mechanisms that contribute to the case and its outcomes. This can help to develop a more accurate and detailed understanding of the case, which can inform future research, practice, or policy.

Case studies can also serve other purposes, including:

  • Illustrating a theory or concept: Case studies can be used to illustrate and explain theoretical concepts and frameworks, providing concrete examples of how they can be applied in real-life situations.
  • Developing hypotheses: Case studies can help to generate hypotheses about the causal relationships between different factors and outcomes, which can be tested through further research.
  • Providing insight into complex issues: Case studies can provide insights into complex and multifaceted issues, which may be difficult to understand through other research methods.
  • Informing practice or policy: Case studies can be used to inform practice or policy by identifying best practices, lessons learned, or areas for improvement.

Advantages of Case Study Research

There are several advantages of case study research, including:

  • In-depth exploration: Case study research allows for a detailed exploration and analysis of a specific phenomenon, issue, or problem in its real-life context. This can provide a comprehensive understanding of the case and its dynamics, which may not be possible through other research methods.
  • Rich data: Case study research can generate rich and detailed data, including qualitative data such as interviews, observations, and documents. This can provide a nuanced understanding of the case and its complexity.
  • Holistic perspective: Case study research allows for a holistic perspective of the case, taking into account the various factors, processes, and mechanisms that contribute to the case and its outcomes. This can help to develop a more accurate and comprehensive understanding of the case.
  • Theory development: Case study research can help to develop and refine theories and concepts by providing empirical evidence and concrete examples of how they can be applied in real-life situations.
  • Practical application: Case study research can inform practice or policy by identifying best practices, lessons learned, or areas for improvement.
  • Contextualization: Case study research takes into account the specific context in which the case is situated, which can help to understand how the case is influenced by the social, cultural, and historical factors of its environment.

Limitations of Case Study Research

There are several limitations of case study research, including:

  • Limited generalizability: Case studies are typically focused on a single case or a small number of cases, which limits the generalizability of the findings. The unique characteristics of the case may not be applicable to other contexts or populations, which may limit the external validity of the research.
  • Biased sampling: Case studies may rely on purposive or convenience sampling, which can introduce bias into the sample selection process. This may limit the representativeness of the sample and the generalizability of the findings.
  • Subjectivity: Case studies rely on the interpretation of the researcher, which can introduce subjectivity into the analysis. The researcher’s own biases, assumptions, and perspectives may influence the findings, which may limit the objectivity of the research.
  • Limited control: Case studies are typically conducted in naturalistic settings, which limits the control that the researcher has over the environment and the variables being studied. This may limit the ability to establish causal relationships between variables.
  • Time-consuming: Case studies can be time-consuming to conduct, as they typically involve a detailed exploration and analysis of a specific case. This may limit the feasibility of conducting multiple case studies or conducting case studies in a timely manner.
  • Resource-intensive: Case studies may require significant resources, including time, funding, and expertise. This may limit the ability of researchers to conduct case studies in resource-constrained settings.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Guide to Writing a Computer Science Case Study

First of all, we want to warn you that there is no easy way to write a professional, high-quality, impressive computer science case study. It is a time-consuming task. Our goal is to show you some shortcuts that reduce the time you lose, and to support you if you are almost ready to give up on the assignment altogether. Don't get too discouraged: you are neither the first nor the last student to face this type of case study writing. Use our practical tips, and your path to the final result will be at least a little easier.

Collecting and Structuring Information

Many students start writing their case studies literally from scratch: they read very little before they begin. Afraid of running out of time, they never gather enough useful information from online and offline sources for their case. Don't repeat this mistake; it will make your life much harder very soon. You need to do extensive research first, even if the case study in question concerns your own experience in solving a computer science problem. That said, limit the time you spend gathering and structuring information, because otherwise you can get stuck endlessly browsing the web without writing anything.

Set a strict deadline and stick to it.

Defining the Major Question/Problem

Case studies are always about solving a problem. You can't write a case study about a purely theoretical situation, with no problem statement and no explained solution. The solution may be a proposed one, or even a description of a bad solution that caused more problems than the initial situation; in any case, there must be a problem statement and a solution. A good case study also includes a section explaining the limitations of the offered solution, because there are no perfect solutions and there are always alternatives.

Choosing and Formatting Sources

When writing a case study in computer science, remember that it is a fast-moving field, and you are expected to use only recent, valid, academically approved sources. Don't dwell on ancient computer history when writing about a practical problem and its solution. Padding the case study with filler is a bad choice: with practical assignments like this, anyone can see that you didn't dig deep. Choose only valid and recent sources and cite them in the required citation style. Even more important, note down every source you plan to use from the very beginning, because later you won't have time to track them down online and offline again. To save even more time, use an online citation generator.

Take Care About Originality

It is hard to plagiarize a practical case study, because you are mostly writing about your own experience, which is not on the web yet. However, technical plagiarism is still possible, and it can get you into trouble. To avoid it, cite everything you write when applicable, and check your final draft with reliable anti-plagiarism software.

Use Professional Case Study Writing Help

If you feel truly stuck with your computer science case study, it is high time to turn to a reliable case study writing service. It is ethical, and it is efficient: we turn to experts in many situations in daily life, and this is just one more problem that paid professionals can solve. You can be an IT genius without being a perfect academic writer or an excellent case study composer. A computer science case study writing service will prepare a quality paper based on your requirements and your vision, give you practical tips on approaching this kind of assignment in the future, and leave you with a sample paper for similar tasks in years to come.




Writing A Case Study

Barbara P

A Complete Case Study Writing Guide With Examples


Many writers find themselves grappling with the challenge of crafting persuasive and engaging case studies. 

The process can be overwhelming, leaving them unsure where to begin or how to structure their study effectively. And, without a clear plan, it's tough to show the value and impact in a convincing way.

But don’t worry!

In this blog, we'll guide you through a systematic process, offering step-by-step instructions on crafting a compelling case study. 

Along the way, we'll share valuable tips and illustrative examples to enhance your understanding. So, let’s get started.


  • 1. What is a Case Study? 
  • 2. Types of Case Studies
  • 3. How To Write a Case Study - 9 Steps
  • 4. Case Study Methods
  • 5. Case Study Format
  • 6. Case Study Examples
  • 7. Benefits and Limitations of Case Studies

What is a Case Study? 

A case study is a detailed analysis and examination of a particular subject, situation, or phenomenon. It involves comprehensive research to gain a deep understanding of the context and variables involved. 

Typically used in academic, business, and marketing settings, case studies aim to explore real-life scenarios, providing insights into challenges, solutions, and outcomes. They serve as valuable tools for learning, decision-making, and showcasing success stories.


Types of Case Studies

Case studies come in various forms, each tailored to address specific objectives and areas of interest. Here are some of the main types of case studies:

  • Illustrative Case Studies: These focus on describing a particular situation or event, providing a detailed account to enhance understanding.
  • Exploratory Case Studies: Aimed at investigating an issue and generating initial insights, these studies are particularly useful when exploring new or complex topics.
  • Explanatory Case Studies: These delve into the cause-and-effect relationships within a given scenario, aiming to explain why certain outcomes occurred.
  • Intrinsic Case Studies: Concentrating on a specific case that holds intrinsic value, these studies explore the unique qualities of the subject itself.
  • Instrumental Case Studies: These are conducted to understand a broader issue and use the specific case as a means to gain insights into the larger context.
  • Collective Case Studies: Involving the study of multiple cases, this type allows for comparisons and contrasts, offering a more comprehensive view of a phenomenon or problem.

How To Write a Case Study - 9 Steps

Crafting an effective case study involves a structured approach to ensure clarity, engagement, and relevance. 

Here's a step-by-step guide on how to write a compelling case study:

Step 1: Define Your Objective

Before diving into the writing process, clearly define the purpose of your case study. Identify the key questions you want to answer and the specific goals you aim to achieve. 

Whether it's to showcase a successful project, analyze a problem, or demonstrate the effectiveness of a solution, a well-defined objective sets the foundation for a focused and impactful case study.

Step 2: Conduct Thorough Research

Gather all relevant information and data related to your chosen case. This may include interviews, surveys, documentation, and statistical data. 

Ensure that your research is comprehensive, covering all aspects of the case to provide a well-rounded and accurate portrayal. 

The more thorough your research, the stronger your case study's foundation will be.

Step 3: Introduction: Set the Stage

Begin your case study with a compelling introduction that grabs the reader's attention. Clearly state the subject and the primary issue or challenge faced. 

Engage your audience by setting the stage for the narrative, creating intrigue, and highlighting the significance of the case.

Step 4: Present the Background Information

Provide context by presenting the background information of the case. Explore relevant history, industry trends, and any other factors that contribute to a deeper understanding of the situation. 

This section sets the stage for readers, allowing them to comprehend the broader context before delving into the specifics of the case.

Step 5: Outline the Challenges Faced

Identify and articulate the challenges or problems encountered in the case. Clearly define the obstacles that needed to be overcome, emphasizing their significance. 

This section sets the stakes for your audience and prepares them for the subsequent exploration of solutions.

Step 6: Detail the Solutions Implemented

Describe the strategies, actions, or solutions applied to address the challenges outlined. Be specific about the decision-making process, the rationale behind the chosen solutions, and any alternatives considered. 

This part of the case study demonstrates problem-solving skills and showcases the effectiveness of the implemented measures.


Step 7: Showcase Measurable Results

Present tangible outcomes and results achieved as a direct consequence of the implemented solutions. Use data, metrics, and success stories to quantify the impact. 

Whether it's increased revenue, improved efficiency, or positive customer feedback, measurable results add credibility and validation to your case study.

Step 8: Include Engaging Visuals

Enhance the readability and visual appeal of your case study by incorporating relevant visuals such as charts, graphs, images, and infographics. 

Visual elements not only break up the text but also provide a clearer representation of data and key points, making your case study more engaging and accessible.

Step 9: Provide a Compelling Conclusion

Wrap up your case study with a strong and conclusive summary. Revisit the initial objectives, recap key findings, and emphasize the overall success or significance of the case. 

This section should leave a lasting impression on your readers, reinforcing the value of the presented information.

Case Study Methods

The methods employed in case study writing are diverse and flexible, catering to the unique characteristics of each case. Here are common methods used in case study writing:

  • Interviews

Conducting one-on-one or group interviews with individuals involved in the case to gather firsthand information, perspectives, and insights.

  • Observation

Directly observing the subject or situation to collect data on behaviors, interactions, and contextual details.

  • Document Analysis

Examining existing documents, records, reports, and other written materials relevant to the case to gather information and insights.

  • Surveys and Questionnaires

Distributing structured surveys or questionnaires to relevant stakeholders to collect quantitative data on specific aspects of the case.

  • Participant Observation

Combining direct observation with active participation in the activities or events related to the case to gain an insider's perspective.

  • Triangulation

Using multiple methods (e.g., interviews, observation, and document analysis) to cross-verify and validate the findings, enhancing the study's reliability.

  • Ethnography

Immersing the researcher in the subject's environment over an extended period, focusing on understanding the cultural context and social dynamics.

Case Study Format

Effectively presenting your case study is as crucial as the content itself. Follow these formatting guidelines to ensure clarity and engagement:

  • Opt for fonts that are easy to read, such as Arial, Calibri, or Times New Roman.
  • Maintain a consistent font size, typically 12 points for the body text.
  • Aim for double-line spacing to maintain clarity and prevent overwhelming the reader with too much text.
  • Utilize bullet points to present information in a concise and easily scannable format.
  • Use numbered lists when presenting a sequence of steps or a chronological order of events.
  • Bold or italicize key phrases or important terms to draw attention to critical points.
  • Use underline sparingly, as it can sometimes be distracting in digital formats.
  • Choose the left alignment style.
  • Use hierarchy to distinguish between different levels of headings, making it easy for readers to navigate.

If you're still having trouble organizing your case study, check out this blog on case study format for helpful insights.

Case Study Examples

If you want to understand how to write a case study, examples are a fantastic way to learn. That's why we've gathered a collection of intriguing case study examples for you to review before you begin writing.

  • Case Study Research Example
  • Case Study Template
  • Case Study Introduction Example
  • Amazon Case Study Example
  • Business Case Study Example
  • APA Format Case Study Example
  • Psychology Case Study Example
  • Medical Case Study Example
  • UX Case Study Example

Looking for more examples? Check out our blog on case study examples for your inspiration!

Benefits and Limitations of Case Studies

Case studies are a versatile and in-depth research method, providing a nuanced understanding of complex phenomena.

However, like any research approach, case studies come with their own benefits, such as rich, contextual insight into a single subject, and limitations, such as limited generalizability and the potential for researcher bias.

Tips for Writing an Effective Case Study

Here are some important tips for writing a good case study:

  • Clearly articulate specific, measurable research questions aligned with your objectives.
  • Identify whether your case study is exploratory, explanatory, intrinsic, or instrumental.
  • Choose a case that aligns with your research questions, whether it involves an individual case or a group of people through multiple case studies.
  • Explore the option of conducting multiple case studies to enhance the breadth and depth of your findings.
  • Present a structured format with clear sections, ensuring readability and alignment with the type of research.
  • Clearly define the significance of the problem or challenge addressed in your case study, tying it back to your research questions.
  • Collect and include quantitative and qualitative data to support your analysis and address the identified research questions.
  • Provide sufficient detail without overwhelming your audience, ensuring a comprehensive yet concise presentation.
  • Emphasize how your findings can be practically applied to real-world situations, linking back to your research objectives.
  • Acknowledge and transparently address any limitations in your study, ensuring a comprehensive and unbiased approach.

To sum it up, creating a good case study involves careful thinking to share valuable insights and keep your audience interested. 

Stick to basics like having clear questions and understanding your research type. Choose the right case and keep things organized and balanced.

Remember, your case study should tackle a problem, use relevant data, and show how it can be applied in real life. Be honest about any limitations, and finish with a clear call-to-action to encourage further exploration.

However, if you are having trouble understanding how to write a case study, it is best to hire professionals. Hiring a paper writing service online will ensure that you get the best grades on your essay without the stress of a deadline.

So be sure to check out a case study writing service online and stay on top of your grades.

Frequently Asked Questions

What is the purpose of a case study?


The objective of a case study is to do intensive research on a specific matter, such as individuals or communities. It's often used for academic purposes where you want the reader to know all factors involved in your subject while also understanding the processes at play.

What are the sources of a case study?

Some common sources of a case study include:

  • Archival records
  • Direct observations and encounters
  • Participant observation
  • Facts and statistics
  • Physical artifacts

What is the sample size of a case study?

A sample size of 30-50 is normally considered acceptable for a case study. However, the final number depends on the scope of your study and the demographic realities on the ground.

Barbara P

Dr. Barbara is a highly experienced writer and author who holds a Ph.D. degree in public health from an Ivy League school. She has worked in the medical field for many years, conducting extensive research on various health topics. Her writing has been featured in several top-tier publications.


What Is a Computer Case?

A case keeps the computer's internal components safe and cool


The computer case serves mainly as a way to physically mount and contain all the actual components inside a computer, like the motherboard, hard drive, optical drive, floppy disk drive, etc. Cases typically come bundled with a power supply.

The housing of a laptop, netbook, or tablet is also considered a case, but since those aren't purchased separately or easily replaceable, the term computer case usually refers to the one that's part of a traditional desktop PC.

Some popular computer case manufacturers include CORSAIR, NZXT, Xoxide, and Antec.

The computer case is also known as a tower, box, system unit, base unit, enclosure, housing, chassis, and cabinet.

Important Computer Case Facts

Motherboards, computer cases, and power supplies all come in different sizes called form factors. All three must be compatible to work properly together.

Many computer cases, especially ones made of metal, contain very sharp edges. Be very careful when working with an open case to avoid serious cuts.

When a computer repair person says "just bring the computer in," they are typically referring to the case and what's inside it, excluding any external keyboard, mouse, monitor, or other peripherals.

Why a Computer Case Is Important

There are several reasons why we use computer cases. One is for protection, which is easy to assume because it's the most obvious. Dust, animals, toys, liquids, etc. can all damage the internal parts of a computer if the hard shell of a computer case doesn't enclose them and keep them away from the outside environment.

Do you always want to be looking at the disc drive, hard drive, motherboard, cables, power supply, and everything else that makes up the computer? Probably not. Hand-in-hand with protection, a computer case also doubles as a way to hide all those parts of the computer that nobody really wants to see each time they look in that direction.

Another good reason to use a case is to keep the computer cool. Proper airflow over the internal components is one more benefit of using a computer case. While the case has special vents that allow some of the fan air to escape, the rest of it is used to cool down the hardware, which would otherwise get quite hot and possibly overheat to the point of malfunction.

Keeping noisy computer parts, like the fans, in a closed space within the computer case is one way to reduce the noise they make.

The structure of the computer case is also important. Because everything is compacted into a single enclosure, the different parts fit together while remaining easily accessible to the user. For example, the USB ports and power button are within easy reach, and the disc drive can be opened at any time.

Computer Case Description

The computer case itself can be constructed from any material that still allows the internal devices to be supported. This is usually steel, plastic, or aluminum but might instead be wood, glass, or styrofoam.

Most computer cases are rectangular and black. Case modding is the term used to describe the styling of a case to personalize it with things like custom internal lighting, paint, or a liquid cooling system.

The front of the computer case contains a power button and sometimes a reset button. Small LED lights are also typical, representing the current power status, hard drive activity, and sometimes other internal processes. These buttons and lights connect directly to the motherboard, which is secured to the inside of the case.

Cases often contain multiple 5.25-inch and 3.5-inch expansion bays for optical drives, floppy disk drives, hard drives, and other media drives. These expansion bays are located at the front of the case so that, for example, the DVD drive can be easily reached by the user when in use.

At least one side of the case, and perhaps both, slides or swings open to allow access to the internal components. See our guide on opening a computer case for instructions, or see what the inside of a PC looks like.

The rear of the computer case contains small openings to fit the connectors contained on the motherboard, which is mounted inside. The power supply is also mounted just inside the back of the case, and a large opening allows for the connection of the power cord and use of the built-in fan. Fans or other cooling devices may be attached to any and all sides of the case.


How to write a case study — examples, templates, and tools


It’s a marketer’s job to communicate the effectiveness of a product or service to potential and current customers to convince them to buy and keep business moving. One of the best methods for doing this is to share success stories that are relatable to prospects and customers based on their pain points, experiences, and overall needs.

That’s where case studies come in. Case studies are an essential part of a content marketing plan. These in-depth stories of customer experiences are some of the most effective at demonstrating the value of a product or service. Yet many marketers don’t use them, whether because of their regimented formats or the process of customer involvement and approval.

A case study is a powerful tool for showcasing your hard work and the success your customer achieved. But writing a great case study can be difficult if you’ve never done it before or if it’s been a while. This guide will show you how to write an effective case study and provide real-world examples and templates that will keep readers engaged and support your business.

In this article, you’ll learn:

  • What is a case study?
  • How to write a case study
  • Case study templates
  • Case study examples
  • Case study tools

What is a case study?

A case study is the detailed story of a customer’s experience with a product or service that demonstrates their success and often includes measurable outcomes. Case studies are used in a range of fields and for various reasons, from business to academic research. They’re especially impactful in marketing as brands work to convince and convert consumers with relatable, real-world stories of actual customer experiences.

The best case studies tell the story of a customer’s success, including the steps they took, the results they achieved, and the support they received from a brand along the way. To write a great case study, you need to:

  • Celebrate the customer and make them — not a product or service — the star of the story.
  • Craft the story with specific audiences or target segments in mind so that the story of one customer will be viewed as relatable and actionable for another customer.
  • Write copy that is easy to read and engaging so that readers will gain the insights and messages intended.
  • Follow a standardized format that includes all of the essentials a potential customer would find interesting and useful.
  • Support all of the claims for success made in the story with data in the forms of hard numbers and customer statements.

Case studies are a type of review but more in depth, aiming to show — rather than just tell — the positive experiences that customers have with a brand. Notably, 89% of consumers read reviews before deciding to buy, and 79% view case study content as part of their purchasing process. When it comes to B2B sales, 52% of buyers rank case studies as an important part of their evaluation process.

Telling a brand story through the experience of a tried-and-true customer matters. The story is relatable to potential new customers as they imagine themselves in the shoes of the company or individual featured in the case study. Showcasing previous customers can help new ones see themselves engaging with your brand in the ways that are most meaningful to them.

Besides sharing the perspective of another customer, case studies stand out from other content marketing forms because they are based on evidence. Whether pulling from client testimonials or data-driven results, case studies tend to have more impact on new business because the story contains information that is both objective (data) and subjective (customer experience) — and the brand doesn’t sound too self-promotional.


Case studies are unique in that there’s a fairly standardized format for telling a customer’s story. But that doesn’t mean there isn’t room for creativity. It’s all about making sure that teams are clear on the goals for the case study — along with strategies for supporting content and channels — and understanding how the story fits within the framework of the company’s overall marketing goals.

Here are the basic steps to writing a good case study.

1. Identify your goal

Start by defining exactly who your case study will be designed to help. Case studies are about specific instances where a company works with a customer to achieve a goal. Identify which customers are likely to have these goals, as well as other needs the story should cover to appeal to them.

The answer is often found in one of the buyer personas that have been constructed as part of your larger marketing strategy. This can include anything from new leads generated by the marketing team to long-term customers that are being pressed for cross-sell opportunities. In all of these cases, demonstrating value through a relatable customer success story can be part of the solution to conversion.

2. Choose your client or subject

Who you highlight matters. Case studies tie brands together that might otherwise not cross paths. A writer will want to ensure that the highlighted customer aligns with their own company’s brand identity and offerings. Look for a customer with positive name recognition who has had great success with a product or service and is willing to be an advocate.

The client should also match up with the identified target audience. Whichever company or individual is selected should be a reflection of other potential customers who can see themselves in similar circumstances, having the same problems and possible solutions.

Some of the most compelling case studies feature customers who:

  • Switch from one product or service to another while naming competitors that missed the mark.
  • Experience measurable results that are relatable to others in a specific industry.
  • Represent well-known brands and recognizable names that are likely to compel action.
  • Advocate for a product or service as a champion and are well-versed in its advantages.

Whoever or whatever customer is selected, marketers must ensure they have the permission of the company involved before getting started. Some brands have strict review and approval procedures for any official marketing or promotional materials that include their name. Acquiring those approvals in advance will prevent any miscommunication or wasted effort if there is an issue with their legal or compliance teams.

3. Conduct research and compile data

Substantiating the claims made in a case study — either by the marketing team or customers themselves — adds validity to the story. To do this, include data and feedback from the client that defines what success looks like. This can be anything from demonstrating return on investment (ROI) to a specific metric the customer was striving to improve. Case studies should prove how an outcome was achieved and show tangible results that indicate to the customer that your solution is the right one.

This step could also include customer interviews. Make sure that the people being interviewed are key stakeholders in the purchase decision or deployment and use of the product or service that is being highlighted. Content writers should work off a set list of questions prepared in advance. It can be helpful to share these with the interviewees beforehand so they have time to consider and craft their responses. One of the best interview tactics to keep in mind is to ask questions where yes and no are not natural answers. This way, your subject will provide more open-ended responses that produce more meaningful content.

4. Choose the right format

There are a number of different ways to format a case study. Depending on what you hope to achieve, one style will be better than another. However, there are some common elements to include, such as:

  • An engaging headline
  • A subject and customer introduction
  • The unique challenge or challenges the customer faced
  • The solution the customer used to solve the problem
  • The results achieved
  • Data and statistics to back up claims of success
  • A strong call to action (CTA) to engage with the vendor

It’s also important to note that while case studies are traditionally written as stories, they don’t have to be in a written format. Some companies choose to get more creative with their case studies and produce multimedia content, depending on their audience and objectives. Case study formats can include traditional print stories, interactive web or social content, data-heavy infographics, professionally shot videos, podcasts, and more.

5. Write your case study

We’ll go into more detail later about how exactly to write a case study, including templates and examples. Generally speaking, though, there are a few things to keep in mind when writing your case study.

  • Be clear and concise. Readers want to get to the point of the story quickly and easily, and they’ll be looking to see themselves reflected in the story right from the start.
  • Provide a big picture. Always make sure to explain who the client is, their goals, and how they achieved success in a short introduction to engage the reader.
  • Construct a clear narrative. Stick to the story from the perspective of the customer and what they needed to solve instead of just listing product features or benefits.
  • Leverage graphics. Incorporating infographics, charts, and sidebars can be a more engaging and eye-catching way to share key statistics and data in readable ways.
  • Offer the right amount of detail. Most case studies are one or two pages with clear sections that a reader can skim to find the information most important to them.
  • Include data to support claims. Show real results — both facts and figures and customer quotes — to demonstrate credibility and prove the solution works.

6. Promote your story

Marketers have a number of options for distribution of a freshly minted case study. Many brands choose to publish case studies on their website and post them on social media. This can help support SEO and organic content strategies while also boosting company credibility and trust as visitors see that other businesses have used the product or service.

Marketers are always looking for quality content they can use for lead generation. Consider offering a case study as gated content behind a form on a landing page or as an offer in an email message. One great way to do this is to summarize the content and tease the full story available for download after the user takes an action.

Sales teams can also leverage case studies, so be sure they are aware that the assets exist once they’re published. Especially when it comes to larger B2B sales, companies often ask for examples of similar customer challenges that have been solved.

Now that you’ve learned a bit about case studies and what they should include, you may be wondering how to start creating great customer story content. Here are a couple of templates you can use to structure your case study.

Template 1 — Challenge-solution-result format

  • Start with an engaging title. This should be fewer than 70 characters long for SEO best practices. One of the best ways to approach the title is to include the customer’s name and a hint at the challenge they overcame in the end.
  • Create an introduction. Lead with an explanation as to who the customer is, the need they had, and the opportunity they found with a specific product or solution. Writers can also suggest the success the customer experienced with the solution they chose.
  • Present the challenge. This should be several paragraphs long and explain the problem the customer faced and the issues they were trying to solve. Details should tie into the company’s products and services naturally. This section needs to be the most relatable to the reader so they can picture themselves in a similar situation.
  • Share the solution. Explain which product or service offered was the ideal fit for the customer and why. Feel free to delve into their experience setting up, purchasing, and onboarding the solution.
  • Explain the results. Demonstrate the impact of the solution they chose by backing up their positive experience with data. Fill in with customer quotes and tangible, measurable results that show the effect of their choice.
  • Ask for action. Include a CTA at the end of the case study that invites readers to reach out for more information, try a demo, or learn more — to nurture them further in the marketing pipeline. What you ask of the reader should tie directly into the goals that were established for the case study in the first place.

Template 2 — Data-driven format

  • Start with an engaging title. Be sure to include a statistic or data point in the first 70 characters. Again, it’s best to include the customer’s name as part of the title.
  • Create an overview. Share the customer’s background and a short version of the challenge they faced. Present the reason a particular product or service was chosen, and feel free to include quotes from the customer about their selection process.
  • Present data point 1. Isolate the first metric that the customer used to define success and explain how the product or solution helped to achieve this goal. Provide data points and quotes to substantiate the claim that success was achieved.
  • Present data point 2. Isolate the second metric that the customer used to define success and explain what the product or solution did to achieve this goal. Provide data points and quotes to substantiate the claim that success was achieved.
  • Present data point 3. Isolate the final metric that the customer used to define success and explain what the product or solution did to achieve this goal. Provide data points and quotes to substantiate the claim that success was achieved.
  • Summarize the results. Reiterate the fact that the customer was able to achieve success thanks to a specific product or service. Include quotes and statements that reflect customer satisfaction and suggest they plan to continue using the solution.
  • Ask for action. Include a CTA at the end of the case study that asks readers to reach out for more information, try a demo, or learn more — to further nurture them in the marketing pipeline. Again, remember that this is where marketers can look to convert their content into action with the customer.

While templates are helpful, seeing a case study in action can also be a great way to learn. Here are some examples of how Adobe customers have experienced success.

Juniper Networks

One example is the Adobe and Juniper Networks case study , which puts the reader in the customer’s shoes. The beginning of the story quickly orients the reader so that they know exactly who the article is about and what they were trying to achieve. Solutions are outlined in a way that shows Adobe Experience Manager is the best choice and a natural fit for the customer. Along the way, quotes from the client are incorporated to help add validity to the statements. The results in the case study are conveyed with clear evidence of scale and volume using tangible data.

Lenovo

The story of Lenovo’s journey with Adobe is one that spans years of planning, implementation, and rollout. The Lenovo case study does a great job of consolidating all of this into a relatable journey that other enterprise organizations can see themselves taking, despite the project size. This case study also features descriptive headers and compelling visual elements that engage the reader and strengthen the content.

Tata Consulting

When it comes to using data to show customer results, this case study does an excellent job of conveying details and numbers in an easy-to-digest manner. Bullet points at the start break up the content while also helping the reader understand exactly what the case study will be about. Tata Consulting used Adobe to deliver elevated, engaging content experiences for a large telecommunications client of its own — an objective that’s relatable for a lot of companies.

Case studies are a vital tool for any marketing team as they enable you to demonstrate the value of your company’s products and services to others. They help marketers do their job and add credibility to a brand trying to promote its solutions by using the experiences and stories of real customers.

When you’re ready to get started with a case study:

  • Think about a few goals you’d like to accomplish with your content.
  • Make a list of successful clients that would be strong candidates for a case study.
  • Reach out to the client to get their approval and conduct an interview.
  • Gather the data to present an engaging and effective customer story.

Adobe can help

There are several Adobe products that can help you craft compelling case studies. Adobe Experience Platform helps you collect data and deliver great customer experiences across every channel. Once you’ve created your case studies, Experience Platform will help you deliver the right information to the right customer at the right time for maximum impact.

To learn more, watch the Adobe Experience Platform story .

Keep in mind that the best case studies are backed by data. That’s where Adobe Real-Time Customer Data Platform and Adobe Analytics come into play. With Real-Time CDP, you can gather the data you need to build a great case study and target specific customers to deliver the content to the right audience at the perfect moment.

Watch the Real-Time CDP overview video to learn more.

Finally, Adobe Analytics turns real-time data into real-time insights. It helps your business collect and synthesize data from multiple platforms to make more informed decisions and create the best case study possible.

Request a demo to learn more about Adobe Analytics.



CiSE Case Studies in Translational Computer Science

Call for Department Articles

CiSE's newest department explores how findings in fundamental research in computer, computational, and data science translate to technologies, solutions, or practice for the benefit of science, engineering, and society. Specifically, each department article will highlight impactful examples of translational research in which work has successfully moved from the laboratory to the field and into the community. The goal is to improve understanding of the underlying approaches and to explore challenges and lessons learned, with the overarching aim of formulating translational research processes that are broadly applicable.

Computing and data are increasingly essential to the research process across all areas of science and engineering and are key catalysts for impactful advances and breakthroughs. Consequently, translating fundamental advances in computer, computational, and data science helps to ensure that these emerging insights, discoveries, and innovations are realized.

Translational Research in Computer and Computational Sciences [1][2] refers to the bridging of foundational and use-inspired (applied) research with the delivery and deployment of its outcomes to the target community, and supports a bi-directional benefit in which the delivery and deployment process informs the research.

Call for Department Contributions: We seek short papers that align with our recommended structure and detail the following aspects of the described research:

  • Overview: A description of the research: what problem does it address, who is the target user community, and what are the key innovations and attributes?
  • Translation Process: What was the process used to move the research from the laboratory to the application? How were outcomes fed back into the research, and over what time period did this occur? How was the translation supported?
  • Impact: What is the impact of the translated research, both on the computer, computational, and data science (CCDS) research itself and on the target domain(s)?
  • Lessons Learned: What are the lessons learned in terms of both the research and the translation process? What challenges were faced?
  • Conclusion: Based on your experience, do you have suggestions for processes or support structures that would have made the translation more effective?

CiSE Department articles are typically up to 3,000 words (including abstract, references, author biographies, and tables/figures [which count as 250 words each]), and are only reviewed by the department editors.

To pitch or submit a department article, please contact the editors directly by emailing:

  • Manish Parashar  
  • David Abramson  

Additional information for authors can be found here.

  • D. Abramson and M. Parashar, “Translational Research in Computer Science,” Computer, vol. 52, no. 9, pp. 16–23, Sept. 2019, doi: 10.1109/MC.2019.2925650.
  • D. Abramson, M. Parashar, and P. Arzberger, “Translational computer science – Overview of the special issue,” J. Computational Sci., 2020, ISSN 1877-7503, https://www.sciencedirect.com/journal/journal-of-computational-science/special-issue/10P6T48JS7B.


Top 8 Computer Vision Use Cases and Examples in 2024


Computer vision (CV) refers to replicating the human visual ability in a computer or machine. Computer vision technology is being used intensively in many sectors, including retail, agriculture, and manufacturing. The global computer vision market is projected to surpass $41 billion by 2030 (see Figure 1).

Figure 1. Computer vision market growth by component 2020 & 2030.


As investments in computer vision rise, business leaders must learn more about the technology and how it can be implemented in their business.

This article explores the top 8 use cases of computer vision in different industries and examples.

In the past, doctors spent hours analyzing medical data to identify diseases and make diagnoses. As the number of patients and the demand for care increase, speed is something the healthcare sector needs. Medical data analysis has now become faster and more accurate through computer vision technology.

Medical image analysis

A computer vision-based system can accurately analyze x-rays, CT scans, MRIs, and other medical images to identify abnormalities such as tumors, blood clots, etc., that are not visible to the human eye. Since there is currently a shortage of radiologists in the market, computer vision can be a way to overcome this issue.

Cancer Detection

A computer vision system can also detect skin cancers by analyzing images of skin abnormalities. Similarly, computer vision and AI-enabled systems can accurately analyze mammograms for breast cancer detection.

To learn more, check out our article on computer vision use cases in the healthcare sector.

You can check our list of medical data annotation tools to find the option that best suits your CV project needs.

Manufacturing

The global computer vision market in the manufacturing sector is growing as more businesses understand the benefits of the technology. Here are some use cases:

Quality control

Quality control is one of the most common applications of CV in manufacturing. Since it is a repetitive and error-prone task, manual quality control processes are inefficient when implemented in large-scale factories. Computer vision QA systems can work with much higher speed and accuracy and save manufacturing facilities time and money.

This is how it works:
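As a toy illustration of this kind of check (all pixel values and thresholds below are invented), one of the simplest forms of automated visual inspection is flagging pixels that deviate too far from a reference brightness:

```python
# Hypothetical sketch of threshold-based visual inspection: the "camera
# frame" is a made-up grid of brightness values, and any pixel that
# deviates too far from the reference brightness is flagged as a defect.

REFERENCE = 200      # expected brightness of a good surface
TOLERANCE = 40       # allowed deviation before a pixel counts as a defect

frame = [
    [201, 198, 205, 199],
    [197, 202,  90, 200],   # the 90 is a simulated scratch
    [203, 196, 201, 204],
]

defects = [
    (row, col)
    for row, line in enumerate(frame)
    for col, pixel in enumerate(line)
    if abs(pixel - REFERENCE) > TOLERANCE
]

print("defect pixels:", defects)   # a real system would reject the part
```

Production systems replace this fixed threshold with trained models and calibrated cameras, but the decision step (compare, then flag) is the same idea.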

Facility automation

Computer vision is also used in automating different functions such as assembly of product parts and worker safety monitoring in a manufacturing facility.

Many production facilities in the automotive industry use computer vision-enabled assembly bots to assemble products with higher speed and accuracy.

See how BMW uses computer vision and AI to detect car models on its assembly line:

To learn more about computer vision use cases in the manufacturing sector, check out this quick read .

Retail

New innovations in the retail sector, such as cashier-less stores and smart surveillance systems, are all enabled by computer vision. Here is how:

Cashier-less stores

During the global pandemic, retail stores became high-risk areas for customers and employees. Computer vision and artificial intelligence allow automation in retail stores and can help overcome these risks. 

After the initial launch of Amazon’s cashier-less convenience stores Amazon Go , many other businesses, such as San Jose University, adopted this idea by implementing computer vision-enabled systems in their convenience stores.

Smart store surveillance

Monitoring shelves and customers at retail stores can be a tedious task if done manually. Computer vision-enabled cameras can:

  • Monitor store shelves
  • Monitor each product on the shelves
  • Examine inventory levels and alert for replenishment
  • Monitor customer movements in the store to identify hot spots
  • Analyze customers for potential shoplifting and theft

To learn more about computer vision in the retail sector, check out our comprehensive article.

Transportation

Computer vision technology is revolutionizing the transportation industry. Here are some use cases:

Autonomous vehicles

The global autonomous vehicle market is projected to reach $62 billion by 2026. Computer vision is the core technology that allows autonomous vehicles to work.

Figure 2. A simple computer vision self-driving car system


Self-driving vehicles are also being used to automate the logistics sector to increase efficiency and reduce lead times in road logistics. 

See how self-driving trucks could eliminate the driver shortage issue in the logistics industry.

Road traffic analysis

Computer vision systems are also used to analyze traffic conditions through cameras mounted on roads. The data from the computer vision system can be input into urban traffic management systems to help optimize traffic flow in the city.

The German city Darmstadt uses a computer vision-enabled traffic management system to improve its traffic flow.

Further reading

You can also read:

  • Medical Annotation: What It Is, Benefits, and Use Cases
  • Top 4 Computer Vision Challenges & Solutions
  • Synthetic Data for Computer Vision: Benefits & Case Studies
  • Computer Vision Consulting: Benefits & How to Choose a Vendor
  • Video Annotation: In-depth guide and Use Cases
  • Document Annotation: In-depth Guide and Use Cases
  • A Guide to Video Annotation Tools and Types



Case Study: The Morris Worm Brings Down the Internet

In 1988, Robert Morris created and released the first computer worm which significantly disrupted the young internet and served as a wakeup call on the importance of cybersecurity. Read our root cause analysis example to learn more about this disaster and the lessons that can be learned from it.

On November 3, 1988, Robert Morris, a graduate student at Cornell, created and released the first computer worm that could spread between computers and copy itself. Morris didn’t have malicious intent, and his worm appears to have been more the result of intellectual curiosity than a purposefully destructive cyber-attack, but an error in the program led to it propagating much faster than he intended. The worm significantly disrupted the young internet, introduced the world to the concept of a software worm and served as a wakeup call on the importance of cybersecurity.

Build a Cause Map

A Cause Map, a visual root cause analysis, can be used to create a root cause analysis case study and analyze this incident. A Cause Map is built by asking “why” questions and using the answers to visually lay out the causes that contributed to an issue, intuitively showing the cause-and-effect relationships. Mapping out all the causes that contributed to an issue ensures that all facets of a problem are well understood, and helps facilitate the development of effective, detailed solutions that can be implemented to reduce the risk of similar issues in the future.
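The repeated “why” questioning can be sketched as a simple chain of cause-and-effect pairs; the wording below is illustrative, not the official Cause Map of this incident:

```python
# Toy sketch: approximate a Cause Map as a chain of cause-and-effect
# pairs produced by repeatedly asking "why". The incident wording below
# is illustrative only.

def build_cause_chain(effect, why_answers):
    """Link an initial effect to each successive 'why' answer."""
    chain = []
    current = effect
    for cause in why_answers:
        chain.append((current, cause))
        current = cause          # the cause becomes the next effect
    return chain

chain = build_cause_chain(
    "Internet disrupted",
    ["Computers reinfected until unusable",
     "Worm duplicated on 1-in-7 'already infected' replies",
     "Propagation speed was underestimated"],
)
for effect, cause in chain:
    print(f"{effect}  <- why? --  {cause}")
```

A real Cause Map branches rather than forming a single chain, but the core move (each answer to “why” becomes the next effect to question) is the same.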

Known flaws

To create his worm, Morris exploited known software bugs and weak passwords that no one had worried about enough to fix. At the time the Morris worm was released, the internet was in its infancy and only used by academics. There was no commercial traffic on the internet, and websites did not exist. Only a small, elite group had access to the internet, so concerns about cybersecurity hadn’t really come up.

What went wrong

Morris was trying to build a harmless worm to highlight security flaws, but an error in the program led to the worm causing a significant amount of disruption. The worm was intended to infect each computer only once; however, to make the worm more difficult to remove, it was designed to duplicate itself every seventh time a computer indicated it had already been infected. The problem was that the speed of propagation was underestimated. Once released, the worm quickly reinfected computers over and over again until they were unable to function, and the internet came crashing down.
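A rough, hypothetical simulation of that flawed duplication rule (the population size, probe behavior, and step count are invented for illustration) shows how quickly copies pile up once most machines are already infected:

```python
import random

# Hedged toy model of the Morris worm's flawed rule: when a probed
# machine reports "already infected", duplicate anyway 1 time in 7.
# All parameters here are invented; the real worm's behavior was more
# complex.

def simulate(n_hosts=50, steps=30, seed=1):
    random.seed(seed)
    copies = [0] * n_hosts          # worm copies running on each host
    copies[0] = 1                   # patient zero
    for _ in range(steps):
        for host in range(n_hosts):
            for _ in range(copies[host]):    # every copy probes once
                target = random.randrange(n_hosts)
                if copies[target] == 0:
                    copies[target] = 1       # fresh infection
                elif random.random() < 1 / 7:
                    copies[target] += 1      # the flawed duplicate rule
    return copies

copies = simulate()
print("infected hosts:", sum(c > 0 for c in copies))
print("max copies on one host:", max(copies))
```

Because every existing copy keeps probing, the total number of copies grows multiplicatively even after every host is infected, which is why machines ground to a halt.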

The worm did more damage than Morris had expected and once he realized what he had done, he asked a colleague to anonymously apologize for the worm and explain how to update computers to prevent it from spreading. But the warning came too late to prevent massive disruption.

Impacts of the Morris Worm

In the short term, the Morris worm created a mess that took many computer experts days to clean up. One lasting impact of the Morris worm that is hard to quantify, but is the most significant consequence of this incident, is its effect on cybersecurity. If the first “hacker” had had malicious intent and come a little later, it's likely that the damage would have been much more severe. The Morris worm highlighted the need to consider cybersecurity relatively early in the development of the internet.

The Morris worm also had a significant impact on its creator, Robert Morris, who became the first person to be indicted under the 1986 Computer Fraud and Abuse Act. He was hit with a $10,050 fine, 400 hours of community service and a three-year probation. After this initial hiccup, Morris went on to have a successful career and now works in the MIT Computer Science and Artificial Intelligence Laboratory.

Download a copy of our Cause Map of the incident. 


2023 case study


  • 1 Introduction
  • 2 The case study
  • 3 Every term in the case study
  • 4 Markscheme for case study
  • 5 Previous years case studies
  • 6 References

Introduction

Higher-level students must write 3 papers. The case study is the third paper. Every year, the case study discusses a different topic. Students must become very familiar with the case study. The IB recommends spending about a year studying this guide.

This page will help you organize and understand the 2023 case study. Here are some external resources:

  • I have found a wonderful resource here, which may help you.
  • There is another good collection of resources here.
  • Here is a reddit thread with some links.

The case study

Click here for the full PDF of the 2023 case study

Every term in the case study

  • Please visit our programming page to see a list of terms involved in machine learning.

Markscheme for case study


Previous years' case studies

  • Click here for the 2022 case study
  • Click here for the 2020 and 2021 case study
  • Click here for the 2019 case study
  • Click here for the 2018 case study
  • Click here for the 2017 case study
  • Click here for the 2016 case study



Simply Coding


Computer Networking: Case Study Questions



This post contains case study questions on Computer Networking.

Case Study 1:

A web server is a special computer system that serves web pages over HTTP. A web page is a medium for carrying data from one computer system to another. The web server's work starts with the client, or user: the client sends a request through the web browser to the web server. The web server takes this request, processes it, and then sends the processed data back to the client. The server gathers all of the web page's information and sends it to the user, who sees it on their computer system in the form of a web page. When the client sends a request for processing to the web server, the domain name and IP address are important to the web server; they are used to identify the user on a large network.
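As a minimal sketch of this request/response cycle, the following uses Python's standard library to serve a single page and then fetch it back as its own client; the page content, handler name, and use of an ephemeral port are all illustrative:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The "processed data" the server sends back to the client.
        body = b"<h1>Hello from the web server</h1>"
        self.send_response(200)                       # HTTP status line
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                        # the web page itself

    def log_message(self, *args):                     # keep the demo quiet
        pass

# Serve on an ephemeral port in the background, then act as the client.
server = HTTPServer(("localhost", 0), HelloHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://localhost:{server.server_address[1]}/"
with urllib.request.urlopen(url) as resp:             # the browser's role
    page = resp.read()
server.shutdown()
print(page.decode())
```

In a real deployment the client and server are different machines, and DNS resolves the domain name in the URL to the server's IP address before the request is sent.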

  • IP addresses
  • Computer systems
  • Webpages of a site
  • A medium to carry data from one computer to another
  • Home address
  • Domain name
  • Both b and c
  • Hypertext Transfer Protocol
  • Hypertext Transfer Procedure
  • Hyperlink Transfer Protocol
  • Hyperlink Transfer Procedure
  • Domain name system
  • Routing information protocol
  • Network time protocol
  • None of the above
  • Domain Name Security
  • Domain Number System
  • Document Name System
  • Domain Name System

Case Study 2:

In the mid-1980s, another federal agency, the NSF, created a new high-capacity network called NSFnet, which was more capable than ARPANET. The only drawback of NSFnet was that it allowed only academic research on its network, and not any kind of private business. Several private organisations and people then started building their own networks, called private networks, which were later (in the 1990s) connected with ARPANET and NSFnet to form the Internet. The Internet really became popular in the 1990s after the development of the World Wide Web.

  • National Senior Foundation Network
  • National Science Framework Network
  • National Science Foundation Network
  • National Science Formation Network
  • Advanced Research Premium Agency NETwork
  • Advanced Research Projects Agency NETwork
  • Advanced Review Projects Agency NETwork
  • Advanced Research Protection Agency NETwork
  • A single network
  • A vast collection of different networks
  • Interconnection of local area networks
  • Interconnection of wide area networks
  • Internet architecture board
  • Internet society
  • Internet service provider
  • Different computer
  • Leased line
  • Digital subscriber line
  • Digital signal line
  • Digital leased line

Case Study 3:

TCP/IP, or the Transmission Control Protocol/Internet Protocol, is a suite of communication protocols used to interconnect network devices on the internet. TCP/IP can also be used as a communications protocol in a private computer network (an intranet or an extranet).

TCP defines how applications can create channels of communication across a network. It also manages how a message is broken into smaller packets before being transmitted over the internet, and how those packets are reassembled in the right order at the destination address.

IP defines how to address and route each packet to make sure it reaches the right destination. Each gateway computer on the network checks this IP address to determine where to forward the message. TCP/IP uses the client-server model of communication in which a user or machine (a client) is provided a service (like sending a webpage) by another computer (a server) in the network. Collectively, the TCP/IP suite of protocols is classified as stateless, which means each client request is considered new because it is unrelated to previous requests. Being stateless frees up network paths so they can be used continuously.
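The client-server model described above can be sketched with the standard socket API; the echo behavior and loopback addresses below are illustrative, and real applications would layer a protocol such as HTTP on top of the TCP channel:

```python
import socket
import threading

# Hedged sketch of TCP's client-server model using the standard socket
# API. Addresses, the one-shot server, and the echo reply are invented
# for illustration.

def serve_once(server_sock):
    conn, _addr = server_sock.accept()        # wait for one client
    with conn:
        data = conn.recv(1024)                # TCP delivers bytes in order
        conn.sendall(b"echo: " + data)        # the "service" provided

# Server side: bind to an ephemeral port and serve in the background.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("localhost", 0))
server.listen(1)
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# Client side: connect, send a message, read the reply.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
client.sendall(b"hello over TCP/IP")
reply = client.recv(1024)
client.close()
server.close()
print(reply.decode())
```

TCP hides the packetization and reassembly described above: the application just writes and reads byte streams, while the IP layer routes the individual packets.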

  • All of the above
  • Remote procedure call
  • Internet relay chat
  • Resource reservation protocol
  • Local procedure call
  • communication between computers on a network
  • metropolitan communication
  • sesson layer
  • transport layer
  • network layer
  • data link layer

Case Study 4:

A blog is a publication of personal views, thoughts, and experiences on the web. It is a kind of personal diary of an individual. The contents published on a blog are organized in reverse chronological order: recent posts appear first and older posts are further down.

Blogger – a person who posts a blog in the form of text, audio, video, weblinks, etc. is known as a blogger. Bloggers have followers who follow them to get instant notification of posts by the blogger.

In most cases, celebrities, business tycoons, famous politicians, social workers, speakers, etc. are successful bloggers, because people follow them to learn about their success stories and ideas.

  • social networking
  • social networking sites
  • e-commerce websites
  • search engines
  • entertainment sites
  • social network
  • entertainment
  • search engine
  • none of these

Which of the following is an example of micro-blogging?

Which of the following is not used as blogging platform?

  • discussion boards

Case Study 5:

An email is a service for sending and receiving messages in the form of text, audio, video, etc. over the internet. Various service providers offer email services to users. The most popular service providers in India are Gmail, Yahoo, Hotmail, Rediff, etc.

An email address for an email account is a unique ID, used to send and receive mail over the Internet. Each email address has two primary components: a username and a domain name. The username comes first, followed by the @ symbol and then the domain name.
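A tiny illustration of those two components (the sample address is made up):

```python
# Split a well-formed email address into its two primary components,
# the username and the domain name. The sample address is invented.

def split_email(address):
    """Return (username, domain) for a well-formed address."""
    username, _, domain = address.partition("@")
    return username, domain

print(split_email("alice@example.com"))
```

Real-world address validation is far stricter than a single split, but the username/domain structure is exactly what mail servers use to route a message.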

  • none of the above

Which of the following is the correct format of email address?

  • name@website@info
  • [email protected]
  • www.nameofwebsite.com
  • name.website.com
  • multipurpose internet mail extensions
  • multipurpose internet mail email
  • multipurpose internet mail end
  • multipurpose internet mail extra
  • mail server
  • user agents

NVT stands for

  • network virtual transmission
  • network virtual test
  • network virtual terminal
  • network virtual tell

Case study 6:

In 1989, Tim Berners-Lee, a researcher, proposed the idea of the World Wide Web. Tim Berners-Lee and his team are credited with inventing the Hyper Text Transfer Protocol (HTTP), HTML, and the technology for a web server and a web browser. Using hyperlinks embedded in hypertext, web developers were able to connect web pages. They could design attractive web pages containing text, sound, and graphics. This change witnessed a massive expansion of the Internet in the 1990s.
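The idea of hyperlinks connecting web pages can be illustrated by extracting link targets from a snippet of hypertext; the snippet below is invented (apart from the historical info.cern.ch address):

```python
from html.parser import HTMLParser

# Sketch: collect the href targets of <a> anchors from a made-up piece
# of hypertext. Following such targets is what "connects" web pages.

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                       # an anchor is a hyperlink
            self.links.extend(v for k, v in attrs if k == "href")

page = ('<p>See <a href="/history.html">history</a> and '
        '<a href="http://info.cern.ch">the first website</a>.</p>')
collector = LinkCollector()
collector.feed(page)
print(collector.links)
```

A browser does essentially this on every page it renders, which is also how web crawlers discover new pages to index.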

  • A program that can display a webpage
  • A program used to view HTML documents
  • It enables a user to access the resources of internet
  • a) is same every time whenever it displays
  • b) generates on demand by a program or a request from browser
  • c) both is same every time whenever it displays and generates on demand by a program or a request from browser
  • d) is different always in a predefined order
  • a) unique reference label
  • b) uniform reference label
  • c) uniform resource locator
  • d) unique resource locator
  • a) asynchronous javascript and xml
  • b) advanced JSP and xml
  • c) asynchronous JSP and xml
  • d) advanced javascript and xml
  • a) convention for representing and interacting with objects in html documents
  • b) application programming interface
  • c) hierarchy of objects in ASP.NET
  • d) scripting language
  • a) VBScript
  • a) sent from a website and stored in user’s web browser while a user is browsing a website
  • b) sent from user and stored in the server while a user is browsing a website
  • c) sent from root server to all servers
  • d) sent from the root server to other root servers

Case study 7:

E-business, commonly known as electronic or online business, is a business where online transactions take place. In this transaction process, the buyer and the seller do not engage personally; the sale happens through the internet. In 1996, Intel’s marketing and internet team coined the term “E-business”.

E-Commerce stands for electronic commerce and is a process through which an individual can buy, sell, deal, order and pay for the products and services over the internet. In this kind of transaction, the seller does not have to face the buyer to communicate. Few examples of e-commerce are online shopping, online ticket booking, online banking, social networking, etc.

  • doing business
  • sale of goods
  • doing business electronically
  • all of the above

Which of the following is not a major type of e-commerce?

  • consolidation
  • preservation
  • reinvention

The primary source of financing during the early years of e-commerce was _______

  • large retail films
  • venture capital funds
  • initial public offerings
  • small products
  • digital products
  • specialty products
  • fresh products
  • value proposition
  • competitive advantage
  • market strategy
  • universal standards

Case study 8:

Due to the rapid rise of the internet and digitization, governments all over the world are taking steps to involve IT in all governmental processes. This is the concept of e-government. It is meant to ensure that government administration becomes a swifter and more transparent process. It also helps save huge costs.

E-Group is a feature provided by many social network services which helps you create, post, comment to and read from their “own interest” and “niche-specific forums”, often over a virtual network. “Groups” create a smaller network within a larger network and the users of the social network services can create, join, leave and report groups accordingly. “Groups” are maintained by “owners, moderators, or managers”, who can edit posts to “discussion threads” and “regulate member behavior” within the group.

  • can be defined as the “application of e-commerce technologies to government and public services .”
  • is the same as internet governance
  • can be defined as “increasing the participation in internet use by socially excluded groups”
  • Individuals in society
  • computer networks
  • Tax Deduction Account Number
  • Tax Deduction and Collection Account Number
  • Taxable Account Number
  • Tax Account Number
  • who conduct seminars
  • who get together on weekends
  • who have regular video conferences
  • having the ability to access and contribute to forum topics

Case study 9:

Coursera has partnered with museums, universities, and other institutions to offer students free classes on an astounding variety of topics. Students can browse the list of available topics or simply answer the question “What would you like to learn about?”, then when they answer that question they are led to a list of available courses on that topic. Students who are nervous about getting in over their heads can relax.

  • Mobile Online Open Courses
  • Massive Online Open Courses
  • Mobile Open Online Courses
  • Massive Open Online Courses
  • Blended learning
  • Distance learning
  • Synchronous learning
  • Asynchronous learning
  • Induction to the company for new employees
  • Microsoft excel training
  • Team-building exercise
  • Building your assertiveness skills at work
  • Learners using technology in a classroom environment lead by a tutor
  • Training course done by youtube tutorials
  • An online learning environment accessed through the internet (i.e. webinars)
  • An online learning course
  • MasterClass
  • SimplyCoding

Case study 10:

Search Engines allow us to filter the tons of information available on the internet and get the most accurate results. And while most people don’t pay too much attention to search engines, they immensely contribute to the accuracy of results and the experience you enjoy while scouring through the internet.

Besides being the most popular search engine, covering over 90% of the worldwide market, Google boasts outstanding features that make it the best search engine in the market. It boasts cutting-edge algorithms, an easy-to-use interface, and a personalized user experience. The platform is renowned for continually updating its search engine results and features to give users the best experience.
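At the core of any search engine's indexing step is an inverted index mapping words to the pages that contain them; the documents below are invented, and real engines add crawling, ranking, and much more on top:

```python
from collections import defaultdict

# Toy inverted index: map each word to the set of pages containing it,
# then answer a query by intersecting those sets. The documents are
# invented examples.

docs = {
    "page1": "computer vision in retail",
    "page2": "search engines index the web",
    "page3": "computer networks and the web",
}

index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    """Return pages containing every word of the query."""
    results = [index.get(w, set()) for w in query.lower().split()]
    return sorted(set.intersection(*results)) if results else []

print(search("the web"))
print(search("computer"))
```

Ranking (deciding which of the matching pages comes first) is where engines differentiate themselves; the retrieval step above only finds the candidates.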

  • Software systems that are designed to search for information on the world wide web 
  • Used to search documents
  • Used to search videos
  • Single word
  • Search engine pages
  • Search engine result pages
  • Web crawler
  • Web indexer
  • Web organizer
  • Web manager
  • Ink directory
  • Search optimizer
  • Generating cached files
  • Affecting the visibility
  • Getting meta tags
  • All of these


A generative AI reset: Rewiring to turn potential into value in 2024

It’s time for a generative AI (gen AI) reset. The initial enthusiasm and flurry of activity in 2023 is giving way to second thoughts and recalibrations as companies realize that capturing gen AI’s enormous potential value is harder than expected.

With 2024 shaping up to be the year for gen AI to prove its value, companies should keep in mind the hard lessons learned with digital and AI transformations: competitive advantage comes from building organizational and technological capabilities to broadly innovate, deploy, and improve solutions at scale—in effect, rewiring the business  for distributed digital and AI innovation.

About QuantumBlack, AI by McKinsey

QuantumBlack, McKinsey’s AI arm, helps companies transform using the power of technology, technical expertise, and industry experts. With thousands of practitioners at QuantumBlack (data engineers, data scientists, product managers, designers, and software engineers) and McKinsey (industry and domain experts), we are working to solve the world’s most important AI challenges. QuantumBlack Labs is our center of technology development and client innovation, which has been driving cutting-edge advancements and developments in AI through locations across the globe.

Companies looking to score early wins with gen AI should move quickly. But those hoping that gen AI offers a shortcut past the tough—and necessary—organizational surgery are likely to meet with disappointing results. Launching pilots is (relatively) easy; getting pilots to scale and create meaningful value is hard because they require a broad set of changes to the way work actually gets done.

Let’s briefly look at what this has meant for one Pacific region telecommunications company. The company hired a chief data and AI officer with a mandate to “enable the organization to create value with data and AI.” The chief data and AI officer worked with the business to develop the strategic vision and implement the road map for the use cases. After a scan of domains (that is, customer journeys or functions) and use case opportunities across the enterprise, leadership prioritized the home-servicing/maintenance domain to pilot and then scale as part of a larger sequencing of initiatives. They targeted, in particular, the development of a gen AI tool to help dispatchers and service operators better predict the types of calls and parts needed when servicing homes.

Leadership put in place cross-functional product teams with shared objectives and incentives to build the gen AI tool. As part of an effort to upskill the entire enterprise to better work with data and gen AI tools, they also set up a data and AI academy, which the dispatchers and service operators enrolled in as part of their training. To provide the technology and data underpinnings for gen AI, the chief data and AI officer also selected a large language model (LLM) and cloud provider that could meet the needs of the domain as well as serve other parts of the enterprise. The chief data and AI officer also oversaw the implementation of a data architecture so that the clean and reliable data (including service histories and inventory databases) needed to build the gen AI tool could be delivered quickly and responsibly.

Never just tech

Creating value beyond the hype

Let’s deliver on the promise of technology from strategy to scale.

Our book Rewired: The McKinsey Guide to Outcompeting in the Age of Digital and AI (Wiley, June 2023) provides a detailed manual on the six capabilities needed to deliver the kind of broad change that harnesses digital and AI technology. In this article, we will explore how to extend each of those capabilities to implement a successful gen AI program at scale. While recognizing that these are still early days and that there is much more to learn, our experience has shown that breaking open the gen AI opportunity requires companies to rewire how they work in the following ways.

Figure out where gen AI copilots can give you a real competitive advantage

The broad excitement around gen AI and its relative ease of use has led to a burst of experimentation across organizations. Most of these initiatives, however, won’t generate a competitive advantage. One bank, for example, bought tens of thousands of GitHub Copilot licenses, but since it didn’t have a clear sense of how to work with the technology, progress was slow. Another unfocused effort we often see is when companies move to incorporate gen AI into their customer service capabilities. Customer service is a commodity capability, not part of the core business, for most companies. While gen AI might help with productivity in such cases, it won’t create a competitive advantage.

To create competitive advantage, companies should first understand the difference between being a “taker” (a user of available tools, often via APIs and subscription services), a “shaper” (an integrator of available models with proprietary data), and a “maker” (a builder of LLMs). For now, the maker approach is too expensive for most companies, so the sweet spot for businesses is implementing a taker model for productivity improvements while building shaper applications for competitive advantage.
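To make the taker/shaper distinction concrete, here is a minimal Python sketch: the taker sends a question straight to a hosted model, while the shaper grounds the same model in proprietary data first. The `call_llm` stub, the document store, and the IDs are invented for illustration and do not correspond to any particular vendor's API.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a hosted model call (a real taker would use an API subscription)."""
    return f"[model response to prompt of {len(prompt)} chars]"

# Taker: use the model as-is, with no proprietary grounding.
def taker_answer(question: str) -> str:
    return call_llm(question)

# Shaper: combine the same model with proprietary data.
PROPRIETARY_DOCS = {
    "truck-07": "Breakdown log: alternator failures correlate with cold starts.",
}

def build_grounded_prompt(question: str, doc_id: str) -> str:
    context = PROPRIETARY_DOCS.get(doc_id, "")
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer using only the context."

def shaper_answer(question: str, doc_id: str) -> str:
    return call_llm(build_grounded_prompt(question, doc_id))
```

The only difference between the two is the prompt-assembly step, which is where a shaper's proprietary data becomes an advantage.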

Much of gen AI’s near-term value is closely tied to its ability to help people do their current jobs better. In this way, gen AI tools act as copilots that work side by side with an employee, creating an initial block of code that a developer can adapt, for example, or drafting a requisition order for a new part that a maintenance worker in the field can review and submit (see sidebar “Copilot examples across three generative AI archetypes”). This means companies should be focusing on where copilot technology can have the biggest impact on their priority programs.

Copilot examples across three generative AI archetypes

  • “Taker” copilots help real estate customers sift through property options and find the most promising one, write code for a developer, and summarize investor transcripts.
  • “Shaper” copilots provide recommendations to sales reps for upselling customers by connecting generative AI tools to customer relationship management systems, financial systems, and customer behavior histories; create virtual assistants to personalize treatments for patients; and recommend solutions for maintenance workers based on historical data.
  • “Maker” copilots are foundation models that lab scientists at pharmaceutical companies can use to find and test new and better drugs more quickly.

Some industrial companies, for example, have identified maintenance as a critical domain for their business. Reviewing maintenance reports and spending time with workers on the front lines can help determine where a gen AI copilot could make a big difference, such as in identifying issues with equipment failures quickly and early on. A gen AI copilot can also help identify root causes of truck breakdowns and recommend resolutions much more quickly than usual, as well as act as an ongoing source for best practices or standard operating procedures.

The challenge with copilots is figuring out how to turn increased productivity into financial gains. In the case of customer service centers, for example, companies can stop recruiting new agents and let attrition deliver real savings. Defining up front how the increased productivity will be monetized is therefore crucial to capturing the value.

Upskill the talent you have but be clear about the gen-AI-specific skills you need

By now, most companies have a decent understanding of the technical gen AI skills they need, such as model fine-tuning, vector database administration, prompt engineering, and context engineering. In many cases, these are skills that you can train your existing workforce to develop. Those with existing AI and machine learning (ML) capabilities have a strong head start. Data engineers, for example, can learn multimodal processing and vector database management, MLOps (ML operations) engineers can extend their skills to LLMOps (LLM operations), and data scientists can develop prompt engineering, bias detection, and fine-tuning skills.

A sample of new generative AI skills needed

The following are examples of new skills needed for the successful deployment of generative AI tools:

  • Data scientist:
    • prompt engineering
    • in-context learning
    • bias detection
    • pattern identification
    • reinforcement learning from human feedback
    • hyperparameter/large language model fine-tuning; transfer learning
  • Data engineer:
    • data wrangling and data warehousing
    • data pipeline construction
    • multimodal processing
    • vector database management
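As a small illustration of one of the listed skills, prompt engineering with in-context learning, the sketch below assembles a few-shot prompt so a model can infer a classification task from examples. The fault reports and labels are invented for illustration:

```python
# Few-shot (in-context) prompting sketch: worked examples are placed in
# the prompt so the model can infer the task format without fine-tuning.
FEW_SHOT_EXAMPLES = [
    ("Fault: overheating spindle", "Category: mechanical"),
    ("Fault: firmware checksum mismatch", "Category: software"),
]

def build_few_shot_prompt(query: str) -> str:
    lines = ["Classify each maintenance fault report."]
    for report, label in FEW_SHOT_EXAMPLES:
        lines.append(report)
        lines.append(label)
    # The prompt ends mid-pattern so the model completes the label.
    lines.append(f"Fault: {query}")
    lines.append("Category:")
    return "\n".join(lines)
```

The skill lies less in the code than in choosing examples that pin down the task and testing which phrasings a given model handles well.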

The learning process can take two to three months to get to a decent level of competence because of the complexities in learning what various LLMs can and can’t do and how best to use them. The coders need to gain experience building software, testing, and validating answers, for example. It took one financial-services company three months to train its best data scientists to a high level of competence. While courses and documentation are available—many LLM providers have boot camps for developers—we have found that the most effective way to build capabilities at scale is through apprenticeship, training people to then train others, and building communities of practitioners. Rotating experts through teams to train others, scheduling regular sessions for people to share learnings, and hosting biweekly documentation review sessions are practices that have proven successful in building communities of practitioners (see sidebar “A sample of new generative AI skills needed”).

It’s important to bear in mind that successful gen AI skills are about more than coding proficiency. Our experience in developing our own gen AI platform, Lilli, showed us that the best gen AI technical talent has design skills to uncover where to focus solutions, contextual understanding to ensure the most relevant and high-quality answers are generated, collaboration skills to work well with knowledge experts (to test and validate answers and develop an appropriate curation approach), strong forensic skills to figure out causes of breakdowns (is the issue the data, the interpretation of the user’s intent, the quality of metadata on embeddings, or something else?), and anticipation skills to conceive of and plan for possible outcomes and to put the right kind of tracking into their code. A pure coder who doesn’t intrinsically have these skills may not be as useful a team member.

While current upskilling is largely based on a “learn on the job” approach, we see a rapid market emerging for people who have learned these skills over the past year. That skill growth is moving quickly. GitHub reported that developers were working on gen AI projects “in big numbers,” and that 65,000 public gen AI projects were created on its platform in 2023—a jump of almost 250 percent over the previous year. If your company is just starting its gen AI journey, you could consider hiring two or three senior engineers who have built a gen AI shaper product for their companies. This could greatly accelerate your efforts.

Form a centralized team to establish standards that enable responsible scaling

To ensure that all parts of the business can scale gen AI capabilities, centralizing competencies is a natural first move. The critical focus for this central team will be to develop and put in place protocols and standards to support scale, ensuring that teams can access models while also minimizing risk and containing costs. The team’s work could include, for example, procuring models and prescribing ways to access them, developing standards for data readiness, setting up approved prompt libraries, and allocating resources.
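A minimal sketch of what such central standards might look like in code, assuming a hypothetical gateway that exposes only approved models and a shared prompt library (the model names and the template are invented):

```python
# Hypothetical central-team assets: an approved-model list and a shared
# prompt library, fronted by a gateway that rejects anything unapproved.
APPROVED_MODELS = {"summarizer-v1", "maintenance-copilot-v2"}

PROMPT_LIBRARY = {
    "summarize": "Summarize the following service report in three bullets:\n{text}",
}

class ModelGateway:
    """Single point of access so teams inherit standards instead of re-deciding them."""

    def render(self, model: str, prompt_key: str, **kwargs) -> str:
        if model not in APPROVED_MODELS:
            raise PermissionError(f"model {model!r} is not approved")
        template = PROMPT_LIBRARY[prompt_key]  # KeyError for unknown prompts
        return template.format(**kwargs)
```

In this sketch an unapproved model or an unknown prompt key fails fast, which is the point of centralizing standards: risk and cost controls are enforced in one place rather than per team.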

While developing Lilli, our team kept scale in mind from the start, creating an open plug-in architecture and setting standards for how APIs should function and be built. The team developed standardized tooling and infrastructure where teams could securely experiment and access a GPT LLM, a gateway with preapproved APIs that teams could access, and a self-serve developer portal. Our goal is that this approach, over time, can help shift “Lilli as a product” (that a handful of teams use to build specific solutions) to “Lilli as a platform” (that teams across the enterprise can access to build other products).

For teams developing gen AI solutions, squad composition will be similar to AI teams but with data engineers and data scientists with gen AI experience and more contributors from risk management, compliance, and legal functions. The general idea of staffing squads with resources that are federated from the different expertise areas will not change, but the skill composition of a gen-AI-intensive squad will.

Set up the technology architecture to scale

Building a gen AI model is often relatively straightforward, but making it fully operational at scale is a different matter entirely. We’ve seen engineers build a basic chatbot in a week, but releasing a stable, accurate, and compliant version that scales can take four months. That’s why, in our experience, the model itself may account for as little as 10 to 15 percent of the total cost of the solution.

Building for scale doesn’t mean building a new technology architecture. But it does mean focusing on a few core decisions that simplify and speed up processes without breaking the bank. Three such decisions stand out:

  • Focus on reusing your technology. Reusing code can increase the development speed of gen AI use cases by 30 to 50 percent. One good approach is simply creating a single source for approved tools, code, and components. A financial-services company, for example, created a library of production-grade tools, approved by both its security and legal teams, for development teams to draw on. More important is taking the time to identify and build the capabilities that are common across the highest-priority use cases. The same financial-services company, for example, identified three components that could be reused across more than 100 identified use cases. By building those first, it generated a significant portion of the code base for all the identified use cases—essentially giving every application a big head start.
  • Focus the architecture on enabling efficient connections between gen AI models and internal systems. For gen AI models to work effectively in the shaper archetype, they need access to a business’s data and applications. Advances in integration and orchestration frameworks have significantly reduced the effort required to make those connections. But laying out what those integrations are and how to enable them is critical to ensure these models work efficiently and to avoid the complexity that creates technical debt (the “tax” a company pays in terms of time and resources needed to redress existing technology issues). Chief information officers and chief technology officers can define reference architectures and integration standards for their organizations. Key elements should include a model hub, which contains trained and approved models that can be provisioned on demand; standard APIs that act as bridges connecting gen AI models to applications or data; and context management and caching, which speed up processing by providing models with relevant information from enterprise data sources.
  • Build up your testing and quality assurance capabilities. Our own experience building Lilli taught us to prioritize testing over development. Our team invested in not only developing testing protocols for each stage of development but also aligning the entire team so that, for example, it was clear who specifically needed to sign off on each stage of the process. This slowed down initial development but sped up the overall delivery pace and quality by cutting back on errors and the time needed to fix mistakes.
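One of the integration elements above, context management and caching, can be sketched simply: memoize model calls so identical prompts are served from a local cache instead of re-invoking the model. The `cached_model_call` stub below stands in for a real LLM request, and the cache policy (LRU, 1,024 entries) is an illustrative choice:

```python
# Minimal caching sketch for repeated gen AI requests.
import functools

CALLS = {"count": 0}  # tracks how many "real" model invocations happen

@functools.lru_cache(maxsize=1024)
def cached_model_call(prompt: str) -> str:
    CALLS["count"] += 1  # this body only runs on a cache miss
    return f"answer to: {prompt[:40]}"  # stand-in for a real model response
```

In production the cache would sit in the gateway layer and include invalidation rules, but even this simple pattern cuts latency and per-call model costs for repeated queries.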

Ensure data quality and focus on unstructured data to fuel your models

The ability of a business to generate and scale value from gen AI models will depend on how well it takes advantage of its own data. As with technology, targeted upgrades to existing data architecture are needed to maximize the future strategic benefits of gen AI:

  • Be targeted in ramping up your data quality and data augmentation efforts. While data quality has always been an important issue, the scale and scope of data that gen AI models can use—especially unstructured data—has made this issue much more consequential. For this reason, it’s critical to get the data foundations right, from clarifying decision rights to defining clear data processes to establishing taxonomies so models can access the data they need. The companies that do this well tie their data quality and augmentation efforts to the specific AI/gen AI application and use case—you don’t need this data foundation to extend to every corner of the enterprise. This could mean, for example, developing a new data repository for all equipment specifications and reported issues to better support maintenance copilot applications.
  • Understand what value is locked into your unstructured data. Most organizations have traditionally focused their data efforts on structured data (values that can be organized in tables, such as prices and features). But the real value from LLMs comes from their ability to work with unstructured data (for example, PowerPoint slides, videos, and text). Companies can map out which unstructured data sources are most valuable and establish metadata tagging standards so models can process the data and teams can find what they need (tagging is particularly important to help companies remove data from models as well, if necessary). Be creative in thinking about data opportunities. Some companies, for example, are interviewing senior employees as they retire and feeding that captured institutional knowledge into an LLM to help improve their copilot performance.
  • Optimize to lower costs at scale. There is often as much as a tenfold difference between what companies pay for data and what they could be paying if they optimized their data infrastructure and underlying costs. This issue often stems from companies scaling their proofs of concept without optimizing their data approach. Two costs generally stand out. One is storage costs arising from companies uploading terabytes of data into the cloud and wanting that data available 24/7. In practice, companies rarely need more than 10 percent of their data to have that level of availability, and accessing the rest over a 24- or 48-hour period is a much cheaper option. The other costs relate to computation with models that require on-call access to thousands of processors to run. This is especially the case when companies are building their own models (the maker archetype) but also when they are using pretrained models and running them with their own data and use cases (the shaper archetype). Companies could take a close look at how they can optimize computation costs on cloud platforms—for instance, putting some models in a queue to run when processors aren’t being used (such as when Americans go to bed and consumption of computing services like Netflix decreases) is a much cheaper option.
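The metadata-tagging idea in the second point can be sketched with an invented tag schema: tags let teams find the unstructured sources they need and also remove a given owner's data from a corpus when required:

```python
# Illustrative metadata tagging over unstructured sources. The tag schema
# ("type", "owner") is an assumption for the sketch.
documents = [
    {"id": "d1", "text": "Q3 board deck", "tags": {"type": "slides", "owner": "finance"}},
    {"id": "d2", "text": "Pump teardown video transcript", "tags": {"type": "video", "owner": "ops"}},
]

def find_by_tag(docs, key, value):
    """Locate sources by tag so teams (and models) can find what they need."""
    return [d for d in docs if d["tags"].get(key) == value]

def remove_owner(docs, owner):
    """Drop all documents from a given owner, e.g. to honor a deletion request."""
    return [d for d in docs if d["tags"].get("owner") != owner]
```

Real systems would store tags alongside embeddings in a vector database, but the governance benefit is the same: without tags, targeted retrieval and targeted removal are both guesswork.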

Build trust and reusability to drive adoption and scale

Because many people have concerns about gen AI, the bar on explaining how these tools work is much higher than for most solutions. People who use the tools want to know how they work, not just what they do. So it’s important to invest extra time and money to build trust by ensuring model accuracy and making it easy to check answers.

One insurance company, for example, created a gen AI tool to help manage claims. As part of the tool, it listed all the guardrails that had been put in place, and for each answer provided a link to the sentence or page of the relevant policy documents. The company also used an LLM to generate many variations of the same question to ensure answer consistency. These steps, among others, were critical to helping end users build trust in the tool.
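The consistency check the insurer used can be sketched as follows: ask several phrasings of the same question and verify the answers agree. In a real system an LLM would generate the variations and answer them; here both are deterministic stand-ins:

```python
# Consistency-check sketch: answer() is a stand-in for the claims tool,
# and the variations are hard-coded rather than LLM-generated.
def answer(question: str) -> str:
    # Stand-in: a real system would call the gen AI tool here.
    return "covered" if "water damage" in question.lower() else "unknown"

VARIATIONS = [
    "Is water damage covered under this policy?",
    "Does the policy cover water damage?",
    "Water damage: is that included in coverage?",
]

def is_consistent(variations) -> bool:
    """True when every phrasing of the question yields the same answer."""
    answers = {answer(q) for q in variations}
    return len(answers) == 1
```

Surfacing inconsistencies this way, before users do, is one of the cheaper ways to build the trust the article describes.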

Part of the training for maintenance teams using a gen AI tool should be to help them understand the limitations of models and how best to get the right answers. That includes teaching workers strategies for reaching the best answer as fast as possible by starting with broad questions and then narrowing them down. This gives the model more context, and it also helps counter the bias of users who think they already know the answer. Having model interfaces that look and feel the same as existing tools also helps users feel less pressured to learn something new each time a new application is introduced.

Getting to scale means that businesses will need to stop building one-off solutions that are hard to use for other similar use cases. One global energy and materials company, for example, has established ease of reuse as a key requirement for all gen AI models, and has found in early iterations that 50 to 60 percent of its components can be reused. This means setting standards for developing gen AI assets (for example, prompts and context) that can be easily reused for other cases.

While many of the risk issues relating to gen AI are evolutions of discussions that were already brewing—for instance, data privacy, security, bias risk, job displacement, and intellectual property protection—gen AI has greatly expanded that risk landscape. Just 21 percent of companies reporting AI adoption say they have established policies governing employees’ use of gen AI technologies.

Similarly, a set of tests for AI/gen AI solutions should be established to demonstrate that data privacy, debiasing, and intellectual property protection are respected. Some organizations, in fact, are proposing to release models accompanied with documentation that details their performance characteristics. Documenting your decisions and rationales can be particularly helpful in conversations with regulators.

In some ways, this article is premature—so much is changing that we’ll likely have a profoundly different understanding of gen AI and its capabilities in a year’s time. But the core truths of finding value and driving change will still apply. How well companies have learned those lessons may largely determine how successful they’ll be in capturing that value.

Eric Lamarre

The authors wish to thank Michael Chui, Juan Couto, Ben Ellencweig, Josh Gartner, Bryce Hall, Holger Harreis, Phil Hudelson, Suzana Iacob, Sid Kamath, Neerav Kingsland, Kitti Lakner, Robert Levin, Matej Macak, Lapo Mori, Alex Peluffo, Aldo Rosales, Erik Roth, Abdul Wahab Shaikh, and Stephen Xu for their contributions to this article.

This article was edited by Barr Seitz, an editorial director in the New York office.


Universities Have a Computer-Science Problem

The case for teaching coders to speak French


Last year, 18 percent of Stanford University seniors graduated with a degree in computer science, more than double the proportion of just a decade earlier. Over the same period at MIT, that rate went up from 23 percent to 42 percent. These increases are common everywhere: The average number of undergraduate CS majors at universities in the U.S. and Canada tripled in the decade after 2005, and it keeps growing. Students’ interest in CS is intellectual—culture moves through computation these days—but it is also professional. Young people hope to access the wealth, power, and influence of the technology sector.

That ambition has created both enormous administrative strain and a competition for prestige. At Washington University in St. Louis, where I serve on the faculty of the Computer Science & Engineering department, each semester brings another set of waitlists for enrollment in CS classes. On many campuses, students may choose to study computer science at any of several different academic outposts, strewn throughout various departments. At MIT, for example, they might get a degree in “Urban Studies and Planning With Computer Science” from the School of Architecture, or one in “Mathematics With Computer Science” from the School of Science, or they might choose from among four CS-related fields within the School of Engineering. This seepage of computing throughout the university has helped address students’ booming interest, but it also serves to bolster their demand.

Another approach has gained in popularity. Universities are consolidating the formal study of CS into a new administrative structure: the college of computing. MIT opened one in 2019. Cornell set one up in 2020. And just last year, UC Berkeley announced that its own would be that university’s first new college in more than half a century. The importance of this trend—its significance for the practice of education, and also of technology—must not be overlooked. Universities are conservative institutions, steeped in tradition. When they elevate computing to the status of a college, with departments and a budget, they are declaring it a higher-order domain of knowledge and practice, akin to law or engineering. That decision will inform a fundamental question: whether computing ought to be seen as a superfield that lords over all others, or just a servant of other domains, subordinated to their interests and control. This is, by no happenstance, also the basic question about computing in our society writ large.

When I was an undergraduate at the University of Southern California in the 1990s, students interested in computer science could choose between two different majors: one offered by the College of Letters, Arts and Sciences, and one from the School of Engineering. The two degrees were similar, but many students picked the latter because it didn’t require three semesters’ worth of study of a (human) language, such as French. I chose the former, because I like French.

An American university is organized like this, into divisions that are sometimes called colleges, and sometimes schools. These typically enjoy a good deal of independence to define their courses of study and requirements as well as research practices for their constituent disciplines. Included in this purview: whether a CS student really needs to learn French.

The positioning of computer science at USC was not uncommon at the time. The first academic departments of CS had arisen in the early 1960s, and they typically evolved in one of two ways: as an offshoot of electrical engineering (where transistors got their start), housed in a college of engineering; or as an offshoot of mathematics (where formal logic lived), housed in a college of the arts and sciences. At some universities, including USC, CS found its way into both places at once.

The contexts in which CS matured had an impact on its nature, values, and aspirations. Engineering schools are traditionally the venue for a family of professional disciplines, regulated with licensure requirements for practice. Civil engineers, mechanical engineers, nuclear engineers, and others are tasked to build infrastructure that humankind relies on, and they are expected to solve problems. The liberal-arts field of mathematics, by contrast, is concerned with theory and abstraction. The relationship between the theoretical computer scientists in mathematics and the applied ones in engineering is a little like the relationship between biologists and doctors, or physicists and bridge builders. Keeping applied and pure versions of a discipline separate allows each to focus on its expertise, but limits the degree to which one can learn from the other.

Read: Programmers, stop calling yourself engineers

By the time I arrived at USC, some universities had already started down a different path. In 1988, Carnegie Mellon University created what it says was one of the first dedicated schools of computer science. Georgia Institute of Technology followed two years later. “Computing was going to be a big deal,” says Charles Isbell, a former dean of Georgia Tech’s college of computing and now the provost at the University of Wisconsin-Madison. Emancipating the field from its prior home within the college of engineering gave it room to grow, he told me. Within a decade, Georgia Tech had used this structure to establish new research and teaching efforts in computer graphics, human-computer interaction, and robotics. (I spent 17 years on the faculty there, working for Isbell and his predecessors, and teaching computational media.)

Kavita Bala, Cornell University’s dean of computing, told me that the autonomy and scale of a college allows her to avoid jockeying for influence and resources. MIT’s computing dean, Daniel Huttenlocher, says that computing’s breakneck pace of innovation makes independence necessary. It would be held back in an arts-and-sciences context, he told me, or even an engineering one.

But the computing industry isn’t just fast-moving. It’s also reckless. Technology tycoons say they need space for growth, and warn that too much oversight will stifle innovation. Yet we might all be better off, in certain ways, if their ambitions were held back even just a little. Instead of operating with a deep understanding or respect for law, policy, justice, health, or cohesion, tech firms tend to do whatever they want. Facebook sought growth at all costs, even if its take on connecting people tore society apart. If colleges of computing serve to isolate young, future tech professionals from any classrooms where they might imbibe another school’s culture and values—engineering’s studied prudence, for example, or the humanities’ focus on deliberation—this tendency might only worsen.

Read: The moral failure of computer scientists

When I raised this concern with Isbell, he said that the same reasoning could apply to any influential discipline, including medicine and business. He’s probably right, but that’s cold comfort. The mere fact that universities allow some other powerful fiefdoms to exist doesn’t make computing’s centralization less concerning. Isbell admitted that setting up colleges of computing “absolutely runs the risk” of empowering a generation of professionals who may already be disengaged from consequences to train the next one in their image. Inside a computing college, there may be fewer critics around who can slow down bad ideas. Disengagement might redouble. But he said that dedicated colleges could also have the opposite effect. A traditional CS department in a school of engineering would be populated entirely by computer scientists, while the faculty for a college of computing like the one he led at Georgia Tech might also house lawyers, ethnographers, psychologists, and even philosophers like me. Bala told me that her college was established not to teach CS on its own but to incorporate policy, law, sociology, and other fields into its practice. “I think there are no downsides,” she said.

Mark Guzdial is a former faculty member in Georgia Tech’s computing college, and he now teaches computer science in the University of Michigan’s College of Engineering. At Michigan, CS wasn’t always housed in engineering—Guzdial says it started out inside the philosophy department, as part of the College of Literature, Science and the Arts. Now that college “wants it back,” as one administrator told Guzdial. Having been asked to start a program that teaches computing to liberal-arts students, Guzdial has a new perspective on these administrative structures. He learned that Michigan’s Computer Science and Engineering program and its faculty are “despised” by their counterparts in the humanities and social sciences. “They’re seen as arrogant, narrowly focused on machines rather than people, and unwilling to meet other programs’ needs,” he told me. “I had faculty refuse to talk to me because I was from CSE.”

In other words, there may be downsides just to placing CS within an engineering school, let alone making it an independent college. Left entirely to themselves, computer scientists can forget that computers are supposed to be tools that help people. Georgia Tech’s College of Computing worked “because the culture was always outward-looking. We sought to use computing to solve others’ problems,” Guzdial said. But that may have been a momentary success. Now, at Michigan, he is trying to rebuild computing education from scratch, for students in fields such as French and sociology. He wants them to understand it as a means of self-expression or achieving justice—and not just a way of making software, or money.

Early in my undergraduate career, I decided to abandon CS as a major. Even as an undergraduate, I already had a side job in what would become the internet industry, and computer science, as an academic field, felt theoretical and unnecessary. Reasoning that I could easily get a job as a computer professional no matter what it said on my degree, I decided to study other things while I had the chance.

I have a strong memory of processing the paperwork to drop my computer-science major in college, in favor of philosophy. I walked down a quiet, blue-tiled hallway of the engineering building. All the faculty doors were closed, although the click-click of mechanical keyboards could be heard behind many of them. I knocked on my adviser’s door; she opened it, silently signed my paperwork without inviting me in, and closed the door again. The keyboard tapping resumed.

The whole experience was a product of its time, when computer science was a field composed of oddball characters, working by themselves, and largely disconnected from what was happening in the world at large. Almost 30 years later, their projects have turned into the infrastructure of our daily lives. Want to find a job? That’s LinkedIn. Keep in touch? Gmail, or Instagram. Get news? A website like this one, we hope, but perhaps TikTok. My university uses a software service sold by a tech company to run its courses. Some things have been made easier with computing. Others have been changed to serve another end, like scaling up an online business.

Read: So much for ‘learn to code’

The struggle to figure out the best organizational structure for computing education is, in a way, a microcosm of the struggle under way in the computing sector at large. For decades, computers were tools used to accomplish tasks better and more efficiently. Then computing became the way we work and live. It became our culture, and we began doing what computers made possible, rather than using computers to solve problems defined outside their purview. Tech moguls became famous, wealthy, and powerful. So did CS academics (relatively speaking). The success of the latter—in terms of rising student enrollments, research output, and fundraising dollars—both sustains and justifies their growing influence on campus.

If computing colleges have erred, it may be in failing to exert their power with even greater zeal. For all their talk of growth and expansion within academia, the computing deans’ ambitions seem remarkably modest. Martial Hebert, the dean of Carnegie Mellon’s computing school, almost sounded like he was talking about the liberal arts when he told me that CS is “a rich tapestry of disciplines” that “goes far beyond computers and coding.” But the seven departments in his school correspond to the traditional, core aspects of computing plus computational biology. They do not include history, for example, or finance. Bala and Isbell talked about incorporating law, policy, and psychology into their programs of study, but only in the form of hiring individual professors into more traditional CS divisions. None of the deans I spoke with aspires to launch, say, a department of art within their college of computing, or one of politics, sociology, or film. Their vision does not reflect the idea that computing can or should be a superordinate realm of scholarship, on the order of the arts or engineering. Rather, they are proceeding as though it were a technical school for producing a certain variety of very well-paid professionals. A computing college deserving of the name wouldn’t just provide deeper coursework in CS and its closely adjacent fields; it would expand and reinvent other, seemingly remote disciplines for the age of computation.

Near the end of our conversation, Isbell mentioned the engineer's fallacy, which he summarized like this: Someone asks you to solve a problem, and you solve it without asking whether it's a problem worth solving. I used to think computing education might be stuck in a nesting-doll version of the engineer's fallacy, in which CS departments have been asked to train more software engineers without considering whether more software engineers are really what the world needs. Now I worry that they have a bigger problem to address: how to make computer people care about everything else as much as they care about computers.

Computer Science > Computation and Language

Title: Enhancing LLM Factual Accuracy with RAG to Counter Hallucinations: A Case Study on Domain-Specific Queries in Private Knowledge-Bases

Abstract: We propose an end-to-end system design that uses Retrieval Augmented Generation (RAG) to improve the factual accuracy of Large Language Models (LLMs) for domain-specific and time-sensitive queries over private knowledge bases. Our system integrates a RAG pipeline with upstream dataset processing and downstream performance evaluation. To address the challenge of LLM hallucinations, we fine-tune models on a curated dataset derived from CMU's extensive resources and annotated with a teacher model. Our experiments demonstrate the system's effectiveness in generating more accurate answers to domain-specific and time-sensitive inquiries. The results also reveal the limitations of fine-tuning LLMs with small-scale and skewed datasets. This research highlights the potential of RAG systems to augment LLMs with external datasets for improved performance on knowledge-intensive tasks. Our code and models are available on GitHub.
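The retrieve-then-generate pattern the abstract describes can be sketched minimally as follows. This is an illustrative toy, not the authors' system: the bag-of-words "embedding" and cosine ranking stand in for a real embedding model and vector index, and the final step simply assembles a grounded prompt where a production pipeline would call an LLM.

```python
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': a term-frequency Counter over lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values())) *
            math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, documents):
    """RAG step: ground the question in retrieved context.
    A real system would send this prompt to an LLM; here we just return it."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The seminar on intellectual property meets on Tuesdays.",
    "RAG grounds model output in retrieved documents.",
    "The cafeteria closes at 8 pm.",
]
prompt = build_prompt("When does the intellectual property seminar meet?", docs)
```

Because the answer is drawn from retrieved source text rather than the model's parametric memory alone, time-sensitive facts (schedules, policies) can be updated by editing the document store instead of retraining, which is the core hallucination-countering idea of RAG.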



Suggested Citation: "12 A Case Study on Computer Programs." National Research Council. 1993. Global Dimensions of Intellectual Property Rights in Science and Technology. Washington, DC: The National Academies Press. doi: 10.17226/2054.
