IB CS Case Study 2024


Paper 3 Practice Papers + Guide + Slides (2024)


This guide includes all of the content that you need to know for Paper 3 of the IB Computer Science HL exam. In addition, 2 practice papers with sample answers are included, written by an experienced IB Computer Science teacher.

Also included are the slides from the YouTube video on this topic.

Note: This corresponds to the 2024 Case Study: Rescue Robots.

Click here to see a preview.

Study Guide, 2 Practice Papers w/ Answers, Slides

External Assessment — Paper 3

Paper 3 asks a number of questions related to a pre-released case study.

Here is the case study for use in May and November 2024.

Case studies from other years.

The maximum number of marks you can get for Paper 3 is 30. Your Paper 3 score counts for 20% of your final HL grade (see grade boundaries).
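
As a rough illustration of that weighting (a minimal sketch only; it assumes the 30 raw marks scale linearly to the 20% weighting, whereas the IB applies its own scaling and grade boundaries):

```python
# Rough illustration only: approximate contribution of a Paper 3 raw mark to the
# overall HL result, assuming the 30 marks scale linearly to a 20% weighting.
def paper3_contribution(raw_mark: int, max_mark: int = 30, weight_percent: float = 20.0) -> float:
    return raw_mark / max_mark * weight_percent

print(paper3_contribution(24))  # 16.0 percentage points of the overall HL total
```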

Grade boundaries

The computer science course has a variety of assessment components. Paper 3 is marked using markschemes and markbands and assigned a numerical mark by the external examiner. Grade boundaries are then applied to determine the overall grade on the 1-7 scale for this component.

These component boundaries do not by themselves determine your final grade; however, they can be used to estimate the difficulty of the component.

Higher Level

International Baccalaureate Computer Science

This page is part of the Hockerill Computing website.


Syllabus components (teaching hours) [SL/AHL]:

  • Topic 1: System fundamentals (20 hours) [SL]
  • Topic 2: Computer organization (6 hours) [SL]
  • Topic 3: Networks (9 hours) [SL]
  • Topic 4: Computational thinking, problem-solving and programming (45 hours) [SL]
  • Topic 5: Abstract data structures (23 hours) [AHL]
  • Topic 6: Resource management (8 hours) [AHL]
  • Topic 7: Control (14 hours) [AHL]
  • Case study (30 hours) [AHL]

Additional subject content introduced by the annually issued case study

Case study for 2024 exam: Rescue robots

Options (30 hours) [SL/AHL]: students study one of the following options; at AHL the option is studied in greater depth (+15 hours).

  • Option A: Databases
  • Option B: Modelling and simulation
  • Option C: Web science
  • Option D: Object-oriented programming (OOP)

Internal assessment (30 hours) [SL]

Practical application of skills through the development of a product and associated documentation

Group 4 project (10 hours) [SL]
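
As a quick cross-check of the outline above (a minimal sketch; it assumes the internal assessment and group 4 project hours apply at both levels, consistent with the standard IB totals of 150 teaching hours for SL and 240 for HL):

```python
# Cross-check of the teaching hours listed in the outline above.
sl_topics = 20 + 6 + 9 + 45            # topics 1-4 [SL]
ahl_topics = 23 + 8 + 14               # topics 5-7 [AHL]
option_sl, option_ahl_extra = 30, 15   # option, plus extra depth at AHL
case_study = 30                        # annually issued case study [AHL]
internal_assessment, group4_project = 30, 10

sl_total = sl_topics + option_sl + internal_assessment + group4_project
hl_total = sl_total + ahl_topics + option_ahl_extra + case_study
print(sl_total, hl_total)              # expected: 150 240
```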

Computer science

Computer science previously formed a subject in group 5 of the Diploma Programme curriculum but now lies within group 4. As such, it is regarded as a science, alongside biology, chemistry, design technology, physics, environmental systems and societies and sports, exercise and health science.

 This group change is significant as it means DP students can now select computer science as their group 4 subject rather than having to select it in addition to mathematics as was previously the case. 

The IB computer science course is a rigorous and practical problem-solving discipline. Features and benefits of the curriculum and assessment include the availability of two course levels, standard level (SL) and higher level (HL).

Learn more about computer science in a DP workshop for teachers.

Computer science subject brief

Subject briefs are short two-page documents providing an outline of the course. Read the standard level (SL) and/or higher level (HL) subject brief below. 


IB CompSci Hub

For HL students only, the third exam involves doing research on a topic that is released by the IBO every year.

Here is what the moderator suggests in preparation for this exam:

Higher Level Paper 3 is a paper that demands significant research on the part of the candidate, guided, of course, by the class teacher. When it comes to answering questions, the focus throughout the paper is on the depth of understanding of the subject material. This is not a paper that can be answered successfully with general knowledge acquired through brief encounters with the material, but only through a well-planned course which places sufficient emphasis on the candidates’ own responsibility to research the case study in depth.

It cannot be stressed enough how important it is to plan for this paper a year in advance. One possible way to start is to get the students to contextualize the various terms and ideas, many of which will initially be unfamiliar. Getting the class to construct mind maps linking these terms and ideas is one possibility. Dividing up the additional terminology amongst the class and setting them research over their vacation is another. It is envisaged that research undertaken outside of the classroom will feature heavily.

This paper clearly rewards those students who are prepared to research in depth the various areas in the relevant case study and who are able to demonstrate their understanding in the examination. This should be made clear to students.

Because this website is no longer updated regularly, please see this website for details on the most recent Paper 3 Case Study: Block Chain Case Study 2020-2021

Note that the 2021 case study is a repeat of the 2020 case study (see the front page of the case study for confirmation)


Case Study "Before One PLC" for IBDP Business May 2024 Exam

The much-awaited IBDP Business case study for the upcoming May 2024 Examination has now been unveiled. You can download the M24 Business Case Study here.

Case Study "Before One PLC" for IBDP Business May 2024 Exam

Are you ready to enjoy this “musical festival” with the business management case study?

The IBDP Business Management Paper 1 exam is scheduled for Friday, 26th April 2024. For detailed information on the examination schedule, click here.

What is the Paper 1 Assessment?

Paper 1 in IB Business Management  encourages a comprehensive approach to the subject. It evaluates all five units of the syllabus and includes an Extended Response Question (Essay) section for Higher Level (HL) students. This assessment task challenges students to enhance their analytical and critical thinking abilities.

What is this M24 Case Study all about?

The IB Organization has released the pre-release case study featuring "Before One PLC," a company that has organized music festivals since 2001.

This case study is centered around the idea that global music festivals are on the rise, featuring diverse locations and genres, from commercial events on farmland to community-led gatherings in public parks. Before One PLC (BON), a European company organizing festivals since 2001, transitioned to a public limited company in 2016 for expansion.

With 60 permanent employees and reliance on temporary workers, BON hosts five festivals yearly, paying farmers an average of $100,000 per event. Cleanup costs average $250,000. The festival season runs from May to August, with each event spanning Friday to Sunday, requiring two weeks for setup and one for dismantling. BON faces challenges in environmental sustainability.
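
A quick back-of-the-envelope calculation from the figures quoted above (a rough sketch only; it covers land payments and cleanup and ignores staffing, equipment and every other cost):

```python
# Rough annual site-related outlay implied by the figures in the case study summary above.
festivals_per_year = 5
land_payment_per_event = 100_000    # average paid to farmers per event
cleanup_cost_per_event = 250_000    # average cleanup cost per event

annual_site_costs = festivals_per_year * (land_payment_per_event + cleanup_cost_per_event)
print(f"${annual_site_costs:,}")    # $1,750,000 per year on land and cleanup alone
```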

Overview of IBDP Business Paper 1 Assessment

  • IBDP Business Management – SL
  • IBDP Business Management – HL

How to Prepare to Ace Your Case Study Paper?

Blen will provide you with valuable tips on how to excel in your IBDP Business Paper 1 Case Study.

Grasp the Case Study: Invest time in carefully reading and analyzing the case study. Make notes, highlight key details, and create a concise summary.

Understand the Paper's Format: Familiarize yourself with the paper's format, as questions typically revolve around all five topics within your IBDP Business Management syllabus. This will aid you in structuring your responses correctly and managing your time effectively.

Utilize Suitable Business Terminology: Incorporate appropriate business terminology to showcase your knowledge and comprehension of the subject matter.

Blen brings you the full analysis of IBDP Business M24 Case Study. Elevate your IBDP performance with Blen – because why waste time when you can learn from the best?


A generative AI reset: Rewiring to turn potential into value in 2024

It’s time for a generative AI (gen AI) reset. The initial enthusiasm and flurry of activity in 2023 is giving way to second thoughts and recalibrations as companies realize that capturing gen AI’s enormous potential value is harder than expected.

With 2024 shaping up to be the year for gen AI to prove its value, companies should keep in mind the hard lessons learned with digital and AI transformations: competitive advantage comes from building organizational and technological capabilities to broadly innovate, deploy, and improve solutions at scale—in effect, rewiring the business  for distributed digital and AI innovation.


Companies looking to score early wins with gen AI should move quickly. But those hoping that gen AI offers a shortcut past the tough—and necessary—organizational surgery are likely to meet with disappointing results. Launching pilots is (relatively) easy; getting pilots to scale and create meaningful value is hard because they require a broad set of changes to the way work actually gets done.

Let’s briefly look at what this has meant for one Pacific region telecommunications company. The company hired a chief data and AI officer with a mandate to “enable the organization to create value with data and AI.” The chief data and AI officer worked with the business to develop the strategic vision and implement the road map for the use cases. After a scan of domains (that is, customer journeys or functions) and use case opportunities across the enterprise, leadership prioritized the home-servicing/maintenance domain to pilot and then scale as part of a larger sequencing of initiatives. They targeted, in particular, the development of a gen AI tool to help dispatchers and service operators better predict the types of calls and parts needed when servicing homes.

Leadership put in place cross-functional product teams with shared objectives and incentives to build the gen AI tool. As part of an effort to upskill the entire enterprise to better work with data and gen AI tools, they also set up a data and AI academy, which the dispatchers and service operators enrolled in as part of their training. To provide the technology and data underpinnings for gen AI, the chief data and AI officer also selected a large language model (LLM) and cloud provider that could meet the needs of the domain as well as serve other parts of the enterprise. The chief data and AI officer also oversaw the implementation of a data architecture so that the clean and reliable data (including service histories and inventory databases) needed to build the gen AI tool could be delivered quickly and responsibly.


Our book Rewired: The McKinsey Guide to Outcompeting in the Age of Digital and AI (Wiley, June 2023) provides a detailed manual on the six capabilities needed to deliver the kind of broad change that harnesses digital and AI technology. In this article, we will explore how to extend each of those capabilities to implement a successful gen AI program at scale. While recognizing that these are still early days and that there is much more to learn, our experience has shown that breaking open the gen AI opportunity requires companies to rewire how they work in the following ways.

Figure out where gen AI copilots can give you a real competitive advantage

The broad excitement around gen AI and its relative ease of use has led to a burst of experimentation across organizations. Most of these initiatives, however, won’t generate a competitive advantage. One bank, for example, bought tens of thousands of GitHub Copilot licenses, but since it didn’t have a clear sense of how to work with the technology, progress was slow. Another unfocused effort we often see is when companies move to incorporate gen AI into their customer service capabilities. Customer service is a commodity capability, not part of the core business, for most companies. While gen AI might help with productivity in such cases, it won’t create a competitive advantage.

To create competitive advantage, companies should first understand the difference between being a “taker” (a user of available tools, often via APIs and subscription services), a “shaper” (an integrator of available models with proprietary data), and a “maker” (a builder of LLMs). For now, the maker approach is too expensive for most companies, so the sweet spot for businesses is implementing a taker model for productivity improvements while building shaper applications for competitive advantage.

Much of gen AI’s near-term value is closely tied to its ability to help people do their current jobs better. In this way, gen AI tools act as copilots that work side by side with an employee, creating an initial block of code that a developer can adapt, for example, or drafting a requisition order for a new part that a maintenance worker in the field can review and submit (see sidebar “Copilot examples across three generative AI archetypes”). This means companies should be focusing on where copilot technology can have the biggest impact on their priority programs.

Copilot examples across three generative AI archetypes

  • “Taker” copilots help real estate customers sift through property options and find the most promising one, write code for a developer, and summarize investor transcripts.
  • “Shaper” copilots provide recommendations to sales reps for upselling customers by connecting generative AI tools to customer relationship management systems, financial systems, and customer behavior histories; create virtual assistants to personalize treatments for patients; and recommend solutions for maintenance workers based on historical data.
  • “Maker” copilots are foundation models that lab scientists at pharmaceutical companies can use to find and test new and better drugs more quickly.
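
To make the “taker” archetype a little more concrete, here is a minimal sketch of a copilot call that drafts a requisition order for a maintenance worker to review. The gateway URL, payload shape and function name are hypothetical placeholders, not any specific vendor's API.

```python
# Minimal sketch of a "taker" copilot: send a task prompt to a subscribed model behind a
# (hypothetical) internal gateway and return a draft for a human to review and submit.
import requests

GATEWAY_URL = "https://llm-gateway.example.internal/v1/complete"  # hypothetical endpoint

def draft_requisition(part_name: str, quantity: int, site: str) -> str:
    prompt = (
        f"Draft a requisition order for {quantity} x {part_name} needed for "
        f"maintenance at site {site}. Keep it under 120 words."
    )
    response = requests.post(GATEWAY_URL, json={"prompt": prompt, "max_tokens": 300}, timeout=30)
    response.raise_for_status()
    return response.json()["text"]  # the field worker reviews this draft before submitting

# Example usage (requires a gateway like the one assumed above):
# print(draft_requisition("hydraulic pump seal", 4, "Plant 7"))
```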

Some industrial companies, for example, have identified maintenance as a critical domain for their business. Reviewing maintenance reports and spending time with workers on the front lines can help determine where a gen AI copilot could make a big difference, such as in identifying issues with equipment failures quickly and early on. A gen AI copilot can also help identify root causes of truck breakdowns and recommend resolutions much more quickly than usual, as well as act as an ongoing source for best practices or standard operating procedures.

The challenge with copilots is figuring out how to generate revenue from increased productivity. In the case of customer service centers, for example, companies can stop recruiting new agents and use attrition to potentially achieve real financial gains. Defining the plans for how to generate revenue from the increased productivity up front, therefore, is crucial to capturing the value.


Upskill the talent you have but be clear about the gen-AI-specific skills you need

By now, most companies have a decent understanding of the technical gen AI skills they need, such as model fine-tuning, vector database administration, prompt engineering, and context engineering. In many cases, these are skills that you can train your existing workforce to develop. Those with existing AI and machine learning (ML) capabilities have a strong head start. Data engineers, for example, can learn multimodal processing and vector database management, MLOps (ML operations) engineers can extend their skills to LLMOps (LLM operations), and data scientists can develop prompt engineering, bias detection, and fine-tuning skills.

A sample of new generative AI skills needed

The following are examples of new skills needed for the successful deployment of generative AI tools:

  • Data scientist:
    • prompt engineering
    • in-context learning
    • bias detection
    • pattern identification
    • reinforcement learning from human feedback
    • hyperparameter/large language model fine-tuning; transfer learning
  • Data engineer:
    • data wrangling and data warehousing
    • data pipeline construction
    • multimodal processing
    • vector database management
The learning process can take two to three months to get to a decent level of competence because of the complexities in learning what various LLMs can and can’t do and how best to use them. The coders need to gain experience building software, testing, and validating answers, for example. It took one financial-services company three months to train its best data scientists to a high level of competence. While courses and documentation are available—many LLM providers have boot camps for developers—we have found that the most effective way to build capabilities at scale is through apprenticeship, training people to then train others, and building communities of practitioners. Rotating experts through teams to train others, scheduling regular sessions for people to share learnings, and hosting biweekly documentation review sessions are practices that have proven successful in building communities of practitioners (see sidebar “A sample of new generative AI skills needed”).

It’s important to bear in mind that successful gen AI skills are about more than coding proficiency. Our experience in developing our own gen AI platform, Lilli , showed us that the best gen AI technical talent has design skills to uncover where to focus solutions, contextual understanding to ensure the most relevant and high-quality answers are generated, collaboration skills to work well with knowledge experts (to test and validate answers and develop an appropriate curation approach), strong forensic skills to figure out causes of breakdowns (is the issue the data, the interpretation of the user’s intent, the quality of metadata on embeddings, or something else?), and anticipation skills to conceive of and plan for possible outcomes and to put the right kind of tracking into their code. A pure coder who doesn’t intrinsically have these skills may not be as useful a team member.

While current upskilling is largely based on a “learn on the job” approach, we see a rapid market emerging for people who have learned these skills over the past year. That skill growth is moving quickly. GitHub reported that developers were working on gen AI projects “in big numbers,” and that 65,000 public gen AI projects were created on its platform in 2023—a jump of almost 250 percent over the previous year. If your company is just starting its gen AI journey, you could consider hiring two or three senior engineers who have built a gen AI shaper product for their companies. This could greatly accelerate your efforts.

Form a centralized team to establish standards that enable responsible scaling

To ensure that all parts of the business can scale gen AI capabilities, centralizing competencies is a natural first move. The critical focus for this central team will be to develop and put in place protocols and standards to support scale, ensuring that teams can access models while also minimizing risk and containing costs. The team’s work could include, for example, procuring models and prescribing ways to access them, developing standards for data readiness, setting up approved prompt libraries, and allocating resources.

While developing Lilli, our team had its mind on scale when it created an open plug-in architecture and set standards for how APIs should function and be built. They developed standardized tooling and infrastructure where teams could securely experiment and access a GPT LLM, a gateway with preapproved APIs that teams could access, and a self-serve developer portal. Our goal is that this approach, over time, can help shift “Lilli as a product” (that a handful of teams use to build specific solutions) to “Lilli as a platform” (that teams across the enterprise can access to build other products).
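
A minimal sketch of the preapproved-API idea described above, assuming a simple registry maintained by a central team; the model names, endpoints and response shape are hypothetical and do not describe Lilli's actual implementation.

```python
# Sketch of a central gateway pattern: teams can only reach models the central team has
# approved, which keeps risk controls and cost tracking in one place.
import requests

APPROVED_MODELS = {  # hypothetical entries maintained by the central gen AI team
    "general-chat": "https://llm-gateway.example.internal/models/general-chat",
    "code-assist": "https://llm-gateway.example.internal/models/code-assist",
}
usage_log: list[dict] = []  # simple usage record to support cost containment

def call_model(model_name: str, prompt: str, team: str) -> str:
    if model_name not in APPROVED_MODELS:
        raise ValueError(f"Model '{model_name}' is not on the approved list")
    resp = requests.post(APPROVED_MODELS[model_name], json={"prompt": prompt}, timeout=30)
    resp.raise_for_status()
    usage_log.append({"team": team, "model": model_name, "prompt_chars": len(prompt)})
    return resp.json()["text"]
```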

For teams developing gen AI solutions, squad composition will be similar to AI teams but with data engineers and data scientists with gen AI experience and more contributors from risk management, compliance, and legal functions. The general idea of staffing squads with resources that are federated from the different expertise areas will not change, but the skill composition of a gen-AI-intensive squad will.

Set up the technology architecture to scale

Building a gen AI model is often relatively straightforward, but making it fully operational at scale is a different matter entirely. We’ve seen engineers build a basic chatbot in a week, but releasing a stable, accurate, and compliant version that scales can take four months. That’s why, our experience shows, the actual model costs may be less than 10 to 15 percent of the total costs of the solution.

Building for scale doesn’t mean building a new technology architecture. But it does mean focusing on a few core decisions that simplify and speed up processes without breaking the bank. Three such decisions stand out:

  • Focus on reusing your technology. Reusing code can increase the development speed of gen AI use cases by 30 to 50 percent. One good approach is simply creating a source for approved tools, code, and components. A financial-services company, for example, created a library of production-grade tools, which had been approved by both the security and legal teams, and made them available in a library for teams to use. More important is taking the time to identify and build those capabilities that are common across the most priority use cases. The same financial-services company, for example, identified three components that could be reused for more than 100 identified use cases. By building those first, they were able to generate a significant portion of the code base for all the identified use cases—essentially giving every application a big head start.
  • Focus the architecture on enabling efficient connections between gen AI models and internal systems. For gen AI models to work effectively in the shaper archetype, they need access to a business’s data and applications. Advances in integration and orchestration frameworks have significantly reduced the effort required to make those connections. But laying out what those integrations are and how to enable them is critical to ensure these models work efficiently and to avoid the complexity that creates technical debt  (the “tax” a company pays in terms of time and resources needed to redress existing technology issues). Chief information officers and chief technology officers can define reference architectures and integration standards for their organizations. Key elements should include a model hub, which contains trained and approved models that can be provisioned on demand; standard APIs that act as bridges connecting gen AI models to applications or data; and context management and caching, which speed up processing by providing models with relevant information from enterprise data sources.
  • Build up your testing and quality assurance capabilities. Our own experience building Lilli taught us to prioritize testing over development. Our team invested in not only developing testing protocols for each stage of development but also aligning the entire team so that, for example, it was clear who specifically needed to sign off on each stage of the process. This slowed down initial development but sped up the overall delivery pace and quality by cutting back on errors and the time needed to fix mistakes.
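
To make the context-management and caching element from the list above more concrete, here is a minimal sketch that caches retrieved enterprise context so repeated questions do not trigger repeated lookups. The retrieval function is a stub; in practice it would query a vector store or internal search service.

```python
# Minimal sketch of context caching: identical lookups are served from memory rather than
# re-querying enterprise data sources on every model call.
from functools import lru_cache

@lru_cache(maxsize=1024)
def get_context(query: str) -> str:
    # Stub: replace with a real call to a vector store or enterprise search service.
    return f"[relevant service histories and inventory records for: {query}]"

def build_prompt(question: str) -> str:
    context = get_context(question)  # cached after the first call for the same question
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("recurring failures on pump model X-200"))
```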

Ensure data quality and focus on unstructured data to fuel your models

The ability of a business to generate and scale value from gen AI models will depend on how well it takes advantage of its own data. As with technology, targeted upgrades to existing data architecture  are needed to maximize the future strategic benefits of gen AI:

  • Be targeted in ramping up your data quality and data augmentation efforts. While data quality has always been an important issue, the scale and scope of data that gen AI models can use—especially unstructured data—has made this issue much more consequential. For this reason, it’s critical to get the data foundations right, from clarifying decision rights to defining clear data processes to establishing taxonomies so models can access the data they need. The companies that do this well tie their data quality and augmentation efforts to the specific AI/gen AI application and use case—you don’t need this data foundation to extend to every corner of the enterprise. This could mean, for example, developing a new data repository for all equipment specifications and reported issues to better support maintenance copilot applications.
  • Understand what value is locked into your unstructured data. Most organizations have traditionally focused their data efforts on structured data (values that can be organized in tables, such as prices and features). But the real value from LLMs comes from their ability to work with unstructured data (for example, PowerPoint slides, videos, and text). Companies can map out which unstructured data sources are most valuable and establish metadata tagging standards so models can process the data and teams can find what they need (tagging is particularly important to help companies remove data from models as well, if necessary). Be creative in thinking about data opportunities. Some companies, for example, are interviewing senior employees as they retire and feeding that captured institutional knowledge into an LLM to help improve their copilot performance.
  • Optimize to lower costs at scale. There is often as much as a tenfold difference between what companies pay for data and what they could be paying if they optimized their data infrastructure and underlying costs. This issue often stems from companies scaling their proofs of concept without optimizing their data approach. Two costs generally stand out. One is storage costs arising from companies uploading terabytes of data into the cloud and wanting that data available 24/7. In practice, companies rarely need more than 10 percent of their data to have that level of availability, and accessing the rest over a 24- or 48-hour period is a much cheaper option. The other costs relate to computation with models that require on-call access to thousands of processors to run. This is especially the case when companies are building their own models (the maker archetype) but also when they are using pretrained models and running them with their own data and use cases (the shaper archetype). Companies could take a close look at how they can optimize computation costs on cloud platforms—for instance, putting some models in a queue to run when processors aren’t being used (such as when Americans go to bed and consumption of computing services like Netflix decreases) is a much cheaper option.
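
As a small illustration of the metadata-tagging point above, the sketch below attaches a few descriptive fields to unstructured documents before indexing so they can later be filtered, audited, or removed from a model's corpus. The field names are illustrative, not a standard schema.

```python
# Minimal sketch of metadata tagging for unstructured documents prior to indexing.
from dataclasses import dataclass, field

@dataclass
class TaggedDocument:
    doc_id: str
    text: str
    source: str              # e.g. "maintenance-reports", "slides", "call-transcripts"
    owner: str               # accountable team, useful if the data must later be removed
    sensitivity: str         # e.g. "public", "internal", "restricted"
    tags: list = field(default_factory=list)

docs = [
    TaggedDocument("doc-001", "Pump X-200 failed after 400 hours of operation...",
                   source="maintenance-reports", owner="field-ops",
                   sensitivity="internal", tags=["equipment", "failure"]),
]

# Filtering by tag keeps only the slice of unstructured data a given copilot needs.
equipment_docs = [d for d in docs if "equipment" in d.tags]
print(len(equipment_docs))
```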

Build trust and reusability to drive adoption and scale

Because many people have concerns about gen AI, the bar on explaining how these tools work is much higher than for most solutions. People who use the tools want to know how they work, not just what they do. So it’s important to invest extra time and money to build trust by ensuring model accuracy and making it easy to check answers.

One insurance company, for example, created a gen AI tool to help manage claims. As part of the tool, it listed all the guardrails that had been put in place, and for each answer provided a link to the sentence or page of the relevant policy documents. The company also used an LLM to generate many variations of the same question to ensure answer consistency. These steps, among others, were critical to helping end users build trust in the tool.
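
A minimal sketch of that consistency check: ask several paraphrases of the same question and flag the case when the answers diverge. The ask_model function and the paraphrases are stand-ins; a real implementation would call the deployed claims tool and could also attach the supporting policy passage to each answer.

```python
# Minimal sketch: probe answer consistency by asking paraphrased versions of one question.
from difflib import SequenceMatcher

def ask_model(question: str) -> str:
    # Stub standing in for the deployed claims-handling tool.
    return "Water damage from a burst pipe is covered up to the policy limit."

def consistency_score(questions: list[str]) -> float:
    answers = [ask_model(q) for q in questions]
    baseline = answers[0]
    ratios = [SequenceMatcher(None, baseline, a).ratio() for a in answers[1:]]
    return min(ratios) if ratios else 1.0

variants = [
    "Is water damage from a burst pipe covered?",
    "Does the policy pay out if a pipe bursts and causes water damage?",
    "A burst pipe flooded the kitchen - am I covered?",
]
score = consistency_score(variants)
print("escalate to a human reviewer" if score < 0.8 else "answers look consistent")
```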

Part of the training for maintenance teams using a gen AI tool should be to help them understand the limitations of models and how best to get the right answers. That includes teaching workers strategies to get to the best answer as fast as possible by starting with broad questions then narrowing them down. This provides the model with more context, and it also helps remove any bias of the people who might think they know the answer already. Having model interfaces that look and feel the same as existing tools also helps users feel less pressured to learn something new each time a new application is introduced.

Getting to scale means that businesses will need to stop building one-off solutions that are hard to use for other similar use cases. One global energy and materials company, for example, has established ease of reuse as a key requirement for all gen AI models, and has found in early iterations that 50 to 60 percent of its components can be reused. This means setting standards for developing gen AI assets (for example, prompts and context) that can be easily reused for other cases.

While many of the risk issues relating to gen AI are evolutions of discussions that were already brewing—for instance, data privacy, security, bias risk, job displacement, and intellectual property protection—gen AI has greatly expanded that risk landscape. Just 21 percent of companies reporting AI adoption say they have established policies governing employees’ use of gen AI technologies.

Similarly, a set of tests for AI/gen AI solutions should be established to demonstrate that data privacy, debiasing, and intellectual property protection are respected. Some organizations, in fact, are proposing to release models accompanied with documentation that details their performance characteristics. Documenting your decisions and rationales can be particularly helpful in conversations with regulators.

In some ways, this article is premature—so much is changing that we’ll likely have a profoundly different understanding of gen AI and its capabilities in a year’s time. But the core truths of finding value and driving change will still apply. How well companies have learned those lessons may largely determine how successful they’ll be in capturing that value.

Eric Lamarre

The authors wish to thank Michael Chui, Juan Couto, Ben Ellencweig, Josh Gartner, Bryce Hall, Holger Harreis, Phil Hudelson, Suzana Iacob, Sid Kamath, Neerav Kingsland, Kitti Lakner, Robert Levin, Matej Macak, Lapo Mori, Alex Peluffo, Aldo Rosales, Erik Roth, Abdul Wahab Shaikh, and Stephen Xu for their contributions to this article.

This article was edited by Barr Seitz, an editorial director in the New York office.

COMMENTS

  1. 2024 case study

    Introduction. Higher-level students must write 3 papers. The case study is the third paper. Every year the case study discusses a different topic. Students must become very familiar with the case study. The IB recommends spending about a year studying this case study. This page will help you organize and understand the 2024 case study .

  2. Computer Science HL Case Study for 2024 : r/IBO

    Resources. The Case Study for 2024 exams (M24 and N24) has been released: "Rescue Robots." Topics include computer vision, robotic navigation of an unknown area, recognizing human poses, etc. All HL students should spend time this summer reading and researching the topic, particularly getting comfortable with the vocabulary.

  3. IB cs case study 2024

Computer science Case study: Rescue robots. For use in May and November 2024. Instructions to candidates: case study booklet required for higher level paper 3. 8 pages. © International Baccalaureate Organization 2023. Scenario: BotPro is a company that makes robots for various industrial applications.

  4. Paper 3 Practice Papers + Guide + Slides (2024)

This guide includes all of the content that you need to know for Paper 3 of the IB Computer Science HL exam. In addition, 2 practice papers with sample answers are included, written by an experienced IB Computer Science teacher. Also included are the slides from the YouTube video on this topic. Note: This corresponds to the 2024 Case Study: Rescue Robots. Click here to see a preview.

  5. Paper 3 (HL only)

    External Assessment — Paper 3 # Paper 3 asks a number of questions related to a pre-released case study. Here is the case study for use in May and November 2024 Case studies from other years. The maximum number of marks you can get for Paper 3 is 30. Your Paper 3 score translates into 20% of your final HL grade, see grade boundaries. Grade boundaries # Computer science course has a variety ...

  6. IB Computer Science

    Need to cram? Check out my handy guide for Paper 3, which also includes some extra information not covered in the video, 2 mock practice papers with sample a...

  7. PDF Computer science Case study: Rescue robots

Computer science Case study: Rescue robots. For use in May and November 2024. Instructions to candidates: case study booklet required for higher level paper 3. ... The rescue robot may need to rely on large databases and the processing power of central computers.

  8. Ib Computer Science

    CASE STUDY SAMPLE PAPERS AND SAMPLE QUESTIONS. CASE STUDY SAMPLE ANSWERS. A great place for IB Computer Science teachers and students, offering in-depth learning materials for most topics, engaging sample questions, and valuable resources to enhance your understanding of the curriculum and excel in your studies.

  9. Case Study

Case Study, from the 2024 IB Diploma Programme Curriculum Booklet by SJI International, page 39. Computer Science. 4. What will be the ...

  10. Computer Science

    The table below shows an outline of the content and teaching hours for SL Computer Science: The topics that must be studied, including some practical work, are: • Topic 1: System fundamentals ...

  11. Conquer the IB Diploma Computer Science Exam & IA(2024-2025)

    This is THE course to help you be completely prepared for the International Baccalaureate Diploma exams in Computer Science. With a focus on exam preparation, this course includes practice questions and both study techniques and test-taking strategies to help you feel confident and prepared on exam day. This course is COMPLETELY UP-TO-DATE and ...

  12. PDF May 2024 examination schedule FINAL VERSION All exam zones (A, B, C)

May 2024 examination schedule. FINAL VERSION. All exam zones (A, B, C). ... Computer science HL paper 1 (2h 10m); Computer science SL paper 1 (1h 30m); Environmental systems & societies SL paper 1 (1h). Monday 29 April ... Principles used in creating the IB Examination Schedule:

  13. IB Computer Science CS Case Study 2024 : r/IBO

IB Computer Science CS Case Study 2024. Group 4. The Case Study for the 2024 exams (M24 and N24) has been released: "Rescue Robots." Topics include computer vision, robotic navigation of an unknown area, recognizing human poses, etc. What do you think about this Case Study? Any ideas?

  14. Hockerill Computing

    IB Computer Science Wikibook - Very good but has gaps: ... Case study (30) [AHL] Additional subject content introduced by the annually issued case study. Case study for 2024 exam: Rescue robots Options. Students study one of the following options (30), at AHL the option is studied in greater depth (+15) ...

  15. 2024 Case Study Rescue Robots

    This video by 'The CS Classroom' looks at the 2024 Case Study introducing you to concepts like vSLAM, LIDAR, and the IMU, Dead Reckoning, Drift, and the vSLAM process, followed by a detailed look at the hardware setup and the underlying algorithms. The video also sheds light on advanced topics such as Relocalisation, Map Optimization, and the ...

  16. IB Computer Science Paper 3 Case Study 2024 Vocabulary

    A computer vision task that involves determining the position and orientation of the human body, along with the positions of various body parts such as the head, arms, legs, and so on, usually in real-time. Inertial measurement unit (IMU) A device that measures and reports on a vehicle's velocity, orientation, and gravitational forces, using a ...

  17. PDF Computer science Case study: May I recommend the following?

    Artists may include actors, singers, screenwriters, comedians, painters, sculptors and filmmakers. In fact, any artist who wants to demonstrate a talent will be able to upload files to the application. The uploaded content can be rated by all users. Based on these ratings, the application recommends new content to each user.

  18. Computer science in DP

The IB computer science course is a rigorous and practical problem-solving discipline. Features and benefits of the curriculum and assessment are as follows: Two course levels are offered: standard level (SL) and higher level (HL). Computer science candidates are not limited by a defined study level so can opt for this course in the same way ...

  19. PDF CASE STUDY PACK MAY 2024

    This comprehensive Case Study Pack (CSP) has been produced to help students in their preparations for the May 2024 Paper 1 examination (Before One PLC). The new format of the Paper 1 CSP contains the following: 1. Glossary of key terms from the pre-release statement.

  20. Paper 3

    Paper 3. For HL students only, the third exam involves doing research on a topic that is released by the IBO every year. Here is what the moderator suggests in preparation for this exam: Higher Level Paper 3 is a paper that demands significant research on the part of the candidate, guided, of course, by the class teacher. When it comes to ...

  21. 2023 CompSci Case Study: All Resources You Need : r/IBO


  22. Case Study "Before One PLC" for IBDP Business May 2024 Exam

The IB Organization has released the pre-release case study featuring "Before One PLC," a company that has organized music festivals since 2001. This case study is centered around the idea that global music festivals are on the rise, featuring diverse locations and genres, from commercial events on farmland to community-led gatherings in public parks ...

  23. Ib Computer Science Case Study

    Rescue Robots case study 2024. Here we have a collection of useful resources on this topic related to the case study, learning material and sample questions. ... IB COMPUTER SCIENCE CASE STUDY 2023 : (MACHINE LEARNING) May I recommend the following? Is the 2023 case study based on Machine Learning principles. Here we will try to add a ...

  24. A generative AI reset: Rewiring to turn potential into value in 2024

It's time for a generative AI (gen AI) reset. The initial enthusiasm and flurry of activity in 2023 is giving way to second thoughts and recalibrations as companies realize that capturing gen AI's enormous potential value is harder than expected. With 2024 shaping up to be the year for gen AI to prove its value, companies should keep in mind the hard lessons learned with digital and AI ...
