
CodeLikeAGirl

Understanding Abstraction in Computer Science: A Key Concept for Programmers 🖥️

Hey there, tech-savvy folks! Today, we’re delving into the fascinating world of abstraction in computer science. As a code-savvy friend 😋 with a passion for coding, I can’t wait to unravel this concept and see how it shapes the way we write programs. So, buckle up and let’s navigate through the depths of abstraction together!

Definition of Abstraction in Computer Science

Alright, first things first – what on earth is abstraction in the context of computer science? 🤔 Well, my friends, abstraction is a concept that allows us to focus on essential details while ignoring the irrelevant ones. It’s like zooming out to see the big picture without getting caught up in all the nitty-gritty details. In the world of programming, abstraction empowers us to build complex systems by compartmentalizing information and operations.

Now, why is this whole abstraction thing so important, you ask? Picture this: You’re building a massive software application with thousands of lines of code. Without abstraction, you’d be drowning in a sea of intricate details, making it nearly impossible to manage and understand your own creation. But fret not, my fellow coders! Abstraction acts as a guiding light, helping us tame the complexity of our programs and make our lives a whole lot easier. Phew!

Levels of Abstraction

Next up, let’s talk about the different levels of abstraction that make the programming world go ’round. We’ve got the high-level abstraction, where we’re cruising in the clouds of generalization and simplicity. Then there’s the low-level abstraction, where we’re getting down and dirty with the inner workings of the system. Each level brings its own set of challenges and rewards, so let’s break it down, shall we?

High-level Abstraction

At the high level, we can think big, dream big, and work with concepts that are closer to our human understanding. Here, we’re abstracting away the intricate details, focusing on the broader structure and functionality of our programs. It’s like looking at a painting from a distance – you see the whole masterpiece without being fixated on individual brushstrokes.

Low-level Abstraction

Now, hold on tight as we take a nosedive into the low-level abstraction zone! This is where we get up close and personal with the inner workings of our programs. We’re talking memory addresses, CPU registers, and other nitty-gritty details that make our software purr like a contented kitten. It’s like inspecting each pixel in that painting we mentioned earlier – intense, intricate, and oh-so-essential.

Examples of Abstraction in Computer Science

Alright, enough with the theory – let’s bring abstraction to life with some real-world examples. When it comes to programming, abstraction wears many hats, but two of the most popular ones are data abstraction and control abstraction. Let’s take a peek at what they’re all about, shall we?

Data Abstraction

Imagine you’re working with a massive dataset, juggling hundreds of variables and complex data structures. Data abstraction swoops in like a superhero, allowing us to hide the implementation details and work with a simplified interface. So, we get to play with the data without getting overwhelmed by its intricacies. It’s like using a vending machine – you select your snack without needing to understand the internal mechanics of how it dispenses it.
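To make the vending-machine analogy concrete, here’s a minimal Python sketch (the class and slot names are invented for illustration): callers get a one-method interface and never touch the internal inventory representation.

```python
class VendingMachine:
    """Data abstraction: a simple public interface over a hidden representation."""

    def __init__(self):
        # Hidden implementation detail: inventory stored as slot -> [snack, count].
        # Callers never see or depend on this structure.
        self._slots = {"A1": ["chips", 3], "B2": ["soda", 2]}

    def dispense(self, slot):
        """Pick a slot, get a snack -- no knowledge of the internals required."""
        snack, count = self._slots[slot]
        if count == 0:
            raise LookupError(f"slot {slot} is empty")
        self._slots[slot][1] -= 1
        return snack


machine = VendingMachine()
print(machine.dispense("A1"))  # -> chips
```

If the machine’s internals later switch to, say, a database lookup, `dispense` keeps the same signature and no caller has to change.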

Control Abstraction

Now, let’s shift gears to control abstraction. This beauty allows us to encapsulate a sequence of operations into a single, easily understandable unit. Ever used a function in your code? That’s a prime example of control abstraction right there! It simplifies the flow of our programs, making them more manageable and less prone to errors. It’s like having a TV remote – you press a button, and it magically handles a bunch of complex commands to change the channel. How cool is that?
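The TV-remote analogy translates directly into code. In this sketch (the function name and steps are invented for illustration), one named operation hides a fixed sequence of lower-level steps:

```python
def change_channel(channel):
    """Control abstraction: one call that encapsulates a sequence of operations."""
    steps = [
        "mute audio",                           # low-level step 1
        f"tune receiver to channel {channel}",  # low-level step 2
        "unmute audio",                         # low-level step 3
    ]
    return steps


# The caller presses one "button" and never sees the three internal steps.
print(change_channel(7))
```

The caller only needs to know what `change_channel` accomplishes, not how; that is exactly what makes functions the prime example of control abstraction.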

Implementation of Abstraction in Programming Languages

Alright, now that we’ve got a good grip on what abstraction is all about, let’s explore how it’s implemented in the wonderful world of programming languages. Brace yourselves as we take a peek into the realms of object-oriented programming and functional programming – two methodologies that are dear to every programmer’s heart.

Object-oriented Programming

Ah, object-oriented programming – the playground of classes, objects, and inheritance! Here, abstraction is a cornerstone, allowing us to create classes that abstract away the complexities of our data and its associated operations. It’s like building a LEGO castle – you work with individual bricks (objects) to create complex structures without needing to know the detailed composition of each brick.

Functional Programming

On the flip side, we have functional programming, where abstraction takes a slightly different route. In this paradigm, we’re all about abstracting operations and behaviors into pure, mathematical functions. It’s like conducting a scientific experiment – you define a function to perform a specific task without worrying about the intricate internal workings of the function itself.
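A quick Python sketch of the functional flavor (the function names are invented for illustration): a pure function plus higher-order functions that abstract the traversal pattern away from the operation performed.

```python
from functools import reduce


def square(x):
    # A pure function: the result depends only on the argument,
    # with no hidden state and no side effects.
    return x * x


def sum_of_squares(numbers):
    # map and reduce abstract *how* we traverse the data away from
    # *what* we do to each element -- behavior passed around as data.
    return reduce(lambda acc, n: acc + n, map(square, numbers), 0)


print(sum_of_squares([1, 2, 3]))  # -> 14
```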

Benefits of Abstraction in Computer Science

Now, let’s talk about the sweet rewards of embracing abstraction in our coding endeavors. This isn’t just some fancy theory – the benefits of abstraction are as real as it gets! So, buckle up as we navigate through the perks of wielding this powerful tool in our programming arsenal.

Code Reusability

With abstraction by our side, we can build reusable components that plug seamlessly into different parts of our codebase. It’s like having a magical toolbox filled with versatile gadgets that can be used over and over again. Instead of reinventing the wheel every time, we can simply grab the wheel from our toolbox and roll with it. Efficient, right?

Improved Readability and Maintainability

Ah, readability and maintainability – the unsung heroes of software development! Abstraction lends a helping hand in making our codebase more understandable and maintainable. By hiding the complex details behind simplified interfaces, we make it easier for fellow programmers (or our future selves) to grasp and modify the code. It’s like tidying up your room – an organized space makes it easier to find what you need and spruce things up when necessary.

Overall, abstraction in computer science is like a superhero cape for programmers – it empowers us to conquer complexity and build elegant, maintainable software. With its levels, examples, implementation, and benefits, abstraction isn’t just a fancy word – it’s the secret sauce that flavors our programming adventures. So, embrace it, wield it, and let abstraction lead the way to coding nirvana! Until next time, happy coding, folks! ✨🚀

Random Fact: Did you know that the concept of abstraction dates back to ancient philosophy, where it was used to describe the process of distancing oneself from sensory experiences? Pretty neat, huh?

Program Code – Understanding Abstraction in Computer Science: A Key Concept for Programmers
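The program listing itself did not survive in this copy of the post; the following is a reconstruction based on the explanation and output below. The class and method names (Vehicle, transportation_mode, common_vehicle_behavior, Car, Boat) are taken from the explanation, so treat this as a best-effort sketch rather than the original listing.

```python
from abc import ABC, abstractmethod


class Vehicle(ABC):
    """Abstract base class: defines the common interface for all vehicles."""

    def __init__(self, vehicle_type):
        self.vehicle_type = vehicle_type

    @abstractmethod
    def transportation_mode(self):
        """Each subclass must supply its own transportation logic."""
        ...

    def common_vehicle_behavior(self):
        # Concrete method: shared functionality for every vehicle.
        print(f"All vehicles transport people or goods. I'm a {self.vehicle_type}.")


class Car(Vehicle):
    def __init__(self):
        super().__init__("Car")

    def transportation_mode(self):
        print("I travel on roads with 4 wheels.")


class Boat(Vehicle):
    def __init__(self):
        super().__init__("Boat")

    def transportation_mode(self):
        print("I sail on water.")


if __name__ == "__main__":
    # Same structure, different behavior: the heart of abstraction.
    for vehicle in (Car(), Boat()):
        vehicle.common_vehicle_behavior()
        vehicle.transportation_mode()
```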

Code output:

  • ‘All vehicles transport people or goods. I’m a Car.’
  • ‘I travel on roads with 4 wheels.’
  • ‘All vehicles transport people or goods. I’m a Boat.’
  • ‘I sail on water.’

Code Explanation:

The code demonstrates an example of abstraction in computer science using Python. Here’s how it rolls out:

  • It starts with importing ABC and abstractmethod from the abc module, which provides the infrastructure for defining abstract base classes (ABCs) in Python. This is pivotal for creating a framework that implements abstraction.
  • The Vehicle abstract class is then defined with a constructor to set the type of vehicle and an abstract method transportation_mode, which, well, remains unimplemented. This forces subclasses to provide their specific transportation logic because let’s face it, a car doesn’t swim and a boat doesn’t vroom on the streets, right?
  • common_vehicle_behavior is a concrete method within the Vehicle class. It’s the shared functionality that all vehicles, regardless of their type, will have. It’s that one line every vehicle would quote on their Tinder profile: ‘I transport peeps and stuff.’
  • Then come the big players: Car and Boat. They’re like the grown-up kids from the Vehicle family, and they’re ready to show the world how they roll (or float). Both implement transportation_mode in their own unique ways, which is the soul of abstraction – the specific hidden behind the general.
  • Finally, the main sanctuary. The main block creates instances of Car and Boat, and calls their methods to exhibit how each vehicle class follows the same structure (thanks to dear old Vehicle), yet behaves differently in its own special, ‘wheely’ or ‘splashy’, way.

In essence, the code encapsulates the concept of abstraction: defining a common interface for various implementations that follow the same structure but differ in internal details. It’s like saying, ‘You do you, as long as you stick to the family rules.’


National Academies Press: OpenBook

Computer Science: Reflections on the Field, Reflections from the Field (2004)

Chapter 4: Abstraction, Representation, and Notations

Models capture phenomena—of the world or of the imagination—in such a way that a general-purpose computer can emulate, simulate, or create the phenomena. But the models are usually not obvious. The real world is complex and nonlinear, there’s too much detail to deal with, and relationships among the details are often hidden. Computer scientists deal with this problem by careful, deliberate creation of abstractions that express the models. These abstractions are represented symbolically, in notations appropriate to the phenomena. The design of languages for these models and for analyzing, processing, and executing them is a core activity of computer science.

Indeed, abstraction is a quintessential activity of computer science—the intellectual tool that allows computer scientists to express their understanding of a problem, manage complexity, and select the level of detail and degree of generality they need at the moment. Computer scientists create and discard abstractions as freely as engineers and architects create and discard design sketches.

Shaw describes the role of abstraction in building software, both the stuff of programs—algorithms and representations—and the role that specification and formal reasoning play in developing those abstractions. Specific software-design techniques such as information hiding and hierarchical organization provide ways to organize the abstract definitions and the information they control. Aho and Larus describe how programming languages provide a notation to encode abstractions so as to allow their direct execution by computer.

ABSTRACTION: IMPOSING ORDER ON COMPLEXITY IN SOFTWARE DESIGN

Mary Shaw, Carnegie Mellon University

The success of a complex designed system depends on the correct organization and interaction of thousands, even millions, of individual parts. If the designer must reason about all the parts at once, the complexity of the design task often overwhelms human capability. Software designers, like other designers, manage this complexity by separating the design task into relatively independent parts. Often, this entails designing large systems as hierarchical collections of subsystems, with the subsystems further decomposed into sub-subsystems, and so on until the individual components are of manageable size.

For typical consumer products, the subsystems are physical components that can be put together on assembly lines. But the principle of hierarchical system organization does not require an assembly line. Simon 1 tells a parable of two watchmakers, Hora and Tempus. Both made excellent watches and were often visited by their customers. Their watches were similar, each with about 1000 parts, but Hora prospered while Tempus became progressively poorer and eventually lost his shop. Tempus, it seems, made his watches in such a way that a partially assembled watch fell apart any time he put it down to deal with an interruption. Hora, on the other hand, made stable subassemblies of about 10 parts and assembled these into 10 larger assemblies, then joined these to make each watch. So any time one of the watchmakers was interrupted by a customer, Tempus had to restart from scratch on the current watch, but Hora only lost the work of the current 10-unit assembly—a small fraction of Tempus’ loss.

Software systems do not require manual assembly of parts, but they are large, complex, and amenable to a similar sort of discipline. Software design benefits from hierarchical system organization based on subsystems that are relatively independent and that have known, simple, interactions. Software designers create conceptual subassemblies with coherent, comprehensible capabilities, similar to Hora’s subassemblies. But whereas Hora’s subassemblies might have been selected for convenience and physical organization, computer scientists are more likely to create structure around concepts and responsibilities. In doing so they can often state the idea, or abstraction, that is realized by the structure; for example, the capabilities of a software component are often described in terms of the component’s observable properties, rather than the details of the component’s implementation. While these abstractions may correspond to discrete software components (the analog of physical parts), this is not necessarily the case. So, for example, a computer scientist might create an abstraction for the software that computes a satellite trajectory but might equally well create an abstraction for a communication protocol whose implementation is woven through all the separate software components of a system. Indeed, the abstractions of computer science can be used in non-hierarchical as well as hierarchical structures. The abstractions of computer science are not in general the grand theories of the sciences (though we have those as well; see Kleinberg and Papadimitriou in Chapter 2), but rather specific conceptual units designed for specific tasks.

We represent these software abstractions in a combination of notations—the descriptive notations of specifications, the imperative notations of programming, the descriptive notations of diagrams, and even narrative prose. This combination of descriptive and imperative languages provides separate descriptions of what is to be done (the specification) and how it is to be done (the implementation). A software component corresponding to an abstraction has a descriptive (sometimes formal) specification of its abstract capabilities, an operational (usually imperative) definition of its implementation, and some assurance—with varying degrees of rigor and completeness—that the specification is consistent with the implementation. Formal descriptive notations, in particular, have evolved more or less together with operational notations, and progress with each depends on progress with the other. The result is that we can design large-scale systems software purposefully, rather than through pure virtuosity, craft, or blind luck. We have not achieved—indeed, may never achieve—the goal of complete formal specifications and programming-language implementations that are verifiably consistent with those specifications. Nevertheless, the joint history of these notations shows how supporting abstractions at one scale enables exploration of abstractions at a larger scale.

Abstractions in Software Systems

In the beginning—that is to say, in the 1950s—software designers expressed programs and data directly in the representation provided by the computer hardware or in somewhat more legible “assembly languages” that mapped directly to the hardware. This required great conceptual leaps from problem domain to machine primitives, which limited the sophistication of the results. The late 1950s saw the introduction of programming languages that allowed the programmer to describe computations through formulas that were compiled into the hardware representation. Similarly, the descriptions of information representation originally referred directly to hardware memory locations (“the flag field is bits 6 to 8 of the third word of the record”). Programming languages of the 1960s developed notations for describing information in somewhat more abstract terms than the machine representation, so that the programmer could refer directly to “flag” and have that reference translated automatically to whichever bits were appropriate. Not only are the more abstract languages easier to read and write, but they also provide a degree of decoupling between the program and the underlying hardware representation that simplifies modification of the program.

In 1967 Knuth 2 showed us how to think systematically about the concept of a data structure (such as a stack, queue, list, tree, graph, matrix, or set) in isolation from its representation and about the concept of an algorithm (such as search, sort, traversal, or matrix inversion) in isolation from the particular program that implements it. This separation liberated us to think independently about the abstraction—the algorithms and data descriptions that describe a result—and its implementation—the specific program and data declarations that implement those ideas on a computer.
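Knuth’s separation can be sketched in a few lines of Python (the class is invented for illustration): the stack is defined by its operations, while the backing representation stays a private, swappable detail.

```python
class Stack:
    """The abstract concept: push, pop, is_empty. Callers reason about
    stack *behavior* without ever seeing the representation."""

    def __init__(self):
        # Representation choice, hidden from callers: a Python list.
        # A linked list would serve equally well without changing any caller.
        self._items = []

    def push(self, value):
        self._items.append(value)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items


s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # -> 2
```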

The next few years saw the development of many elegant and sophisticated algorithms with associated data representations. Sometimes the speed of the algorithm depended on a special trick of representation. Such was the case with in-place heapsort, a sorting algorithm that begins by regarding—abstracting—the values to be sorted as a one-dimensional unsorted array. As the heapsort algorithm runs, it rearranges the values in a particularly elegant way so that one end of the array can be abstracted as a progressively growing tree, and when the algorithm terminates, the entire array has become an abstract tree with the sorted values in a simple-to-extract order. In most actual programs that implemented heapsort, though, these abstractions were not described explicitly, so any programmer who changed the program had to depend on intuition and sketchy, often obsolete, prose documentation to determine the original programmer’s intentions. Further, the program that implemented the algorithms had no special relation to the data structures. This situation was fraught with opportunities for confusion and for lapses of discipline, which led to undocumented (frequently unintended) dependencies on representation tricks. Unsurprisingly, program errors often occurred when another programmer subsequently changed the data representation. In response to this problem, in the 1970s a notion of “type” emerged to help document the intended uses of data. For example, we came to understand that referring to record fields abstractly—by a symbolic name rather than by absolute offset from the start of a data block—made programs easier to understand as well as to modify, and that this could often be done without making the program run slower.

At the same time, the intense interest in algorithms dragged representation along as a poor cousin. In the early 1970s, there was a growing sense that “getting the data structures right” was a key to good software design. Parnas 3 elaborated this idea, arguing that a focus on data structures should lead to organizing software modules around data structures rather than around collections of procedures. Further, he advanced the then-radical proposition that not all information about how data is represented should be shared, because programmers who used the data would rely on things that might subsequently change. Better, he said, to specify what a module would accomplish and allow privileged access to the details only for selected code whose definition was in the same module as the representation. The abstract description should provide all the information required to use the component, and the implementer of the component would only be obligated to keep the promises made in that description. He elaborated this idea as “information hiding.” Parnas subsequently spent several years at the Naval Research Laboratory applying these ideas to the specification of the A7E avionics system, showing that the idea could scale up to practical real-world systems.
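A small Python sketch of Parnas-style information hiding (the record and flag names are invented, loosely echoing the avionics setting): clients see only the named operations the module promises, while the bit-level layout stays private and can change without breaking them.

```python
class StatusRecord:
    """Information hiding: the packed-bit representation is a private
    detail; the module's promise is only the named operations below."""

    # Hidden layout decision: flags packed into bits 0-2 of one integer.
    _ARMED, _AIRBORNE, _LOCKED = 0, 1, 2

    def __init__(self):
        self._bits = 0

    def set_airborne(self):
        self._bits |= 1 << self._AIRBORNE

    def is_airborne(self):
        # Clients never compute bit offsets themselves; if the layout
        # changes (say, to a dict of booleans), no client code changes.
        return bool(self._bits & (1 << self._AIRBORNE))


record = StatusRecord()
record.set_airborne()
print(record.is_airborne())  # -> True
```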

This was one of the precursors of object-oriented programming and the marketplace for independently developed components that can be used unchanged in larger systems, from components invoked by procedure calls from a larger system, through Java applets that download into Web browsers, to third-party filters for photo-processing programs. Computer scientists are still working out the consequences of using abstract descriptions to encapsulate details. Abstractions can, in some circumstances, be used in many software systems rather than custom-defined for a specific use. However, the interactions between parts can be subtle—including not only the syntactic rules for invoking the parts but also the semantics of their computations—and the problems associated with making independently developed parts work properly together remain an active research area.

So why isn’t such a layered abstract description just a house of cards, ready to tumble down in the slightest whiff of wind? Because we partition our tasks so that we deal with different concerns at different levels of abstraction; by establishing reasonable confidence in each level of abstraction and understanding the relations between the levels, we build our confidence in the whole system. Some of our confidence is operational: we use tools with a demonstrated record of success. Chief among these tools are the programming languages, supported by compilers that automatically convert the abstractions to code (see Aho and Larus in this chapter). Other confidence comes from testing—a kind of end-to-end check that the actual software behaves, at least to the extent we can check, like the system we intended to develop. Deeper confidence is instilled by formal analysis of the symbolic representation of the software, which brings us to the second part of the story.

Specifications of Software Systems

In the beginning, programming was an art form and debugging was very much ad hoc. In 1967, Floyd 4 showed how to reason formally about the effect a program has on its data. More concretely, he showed that for each operation a simple program makes, you can state a formal relation between the previous and following program state; further, you can compose these relations to determine what the program actually computes. Specifically he showed that given a program, a claim about what that program computes, and a formal definition of the programming language, you can derive the starting conditions, if any, for which that claim is true. Hoare and Dijkstra created similar but different formal rules for reasoning about programs in Pascal-like languages in this way.
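Floyd’s idea, a formal relation between the program state before and after each operation, can be hinted at with runtime assertions. This is a loose Python illustration of the style of reasoning, not the formalism itself:

```python
def sum_to(n):
    # Precondition: n is a non-negative integer.
    assert n >= 0
    total, i = 0, 0
    while i < n:
        i += 1
        total += i
        # Invariant, in Floyd's spirit: a relation between program
        # variables that holds after every iteration.
        assert total == i * (i + 1) // 2
    # Postcondition, composed from the per-step relations above:
    assert total == n * (n + 1) // 2
    return total


print(sum_to(5))  # -> 15
```

Composing the per-iteration relation yields the claim about what the whole program computes, which is exactly the step Floyd mechanized on paper.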

The immediate reaction, that programs could be “proved correct” (actually, that the implementation of a program could be shown to be consistent with its specification) proved overly optimistic. However, the possibility of reasoning formally about a program changed the way people thought about programming and stimulated interest in formal specification of components and of programming languages—for precision in explanation, if not for proof. Formal specifications have now been received well for making intentions precise and for some specific classes of analysis, but the original promise remains unfulfilled. For example, there remains a gap between specifications of practical real-world systems and the complete, static specifications of the dream. Other remaining problems include effective specifications of properties other than functionality, tractability of analysis, and scaling to problems of realistic size.

In 1972, Hoare 5 showed how to extend this formalized reasoning to encapsulations of the sort Parnas was exploring. This showed how to formalize the crucial abstraction step that expresses the relation between the abstraction and its implementation. Later in the 1970s, theoretical computer scientists linked the pragmatic notion of types that allowed compilers to do some compile-time checking to a theoretical model of type theory.

One of the obstacles to “proving programs correct” was the difficulty in creating a correct formal definition of the programming language in which the programs were written. The first approach was to add formal specifications to the programming language, as in Alphard, leaving proof details to the programmer. The formal analysis task was daunting, and it was rarely carried out. Further, many of the properties of interest about a particular program do not lend themselves to expression in formal logic. The second approach was to work hard on a simple common programming language such as Pascal to obtain formal specifications of the language semantics with only modest changes to the language, with a result such as Euclid. This revealed capabilities of programming languages that do not lend themselves to formalization. The third approach was to design a family of programming languages such as ML that attempt to include only constructs that lend themselves to formal analysis (assuming, of course, a correct implementation of the compiler). These languages require a style of software development that is an awkward match for many software problems that involve explicit state and multiple cooperating threads of execution.

Formal specifications have found a home in practice not so much in verification of full programs as in the use of specifications to clarify requirements and design. The cost of repairing a problem increases drastically the later the problem is discovered, so this clarification is of substantial practical importance. In addition, specific critical aspects of a program may be analyzed formally, for example through static analysis or model checking.

The Interaction of Abstraction and Specification

This brings us to the third part of our story: the coupling between progress in the operational notations of programming languages and the descriptive notations of formal specification systems. We can measure progress in programming language abstraction, at least qualitatively, by the scale of the supported abstractions—the quantity of machine code represented by a single abstract construct. We can measure progress in formal specification, equally qualitatively, by the fraction of a complex software system that is amenable to formal specification and analysis. And we see in the history of both, that formal reasoning about programs has grown hand in hand with the capability of the languages to express higher-level abstractions about the software. Neither advances very far without waiting for the other to catch up.

We can see this in the development of type systems. One of the earliest type systems was the Fortran variable naming convention: operations on variables whose names began with I, J, K, L, or M were compiled with fixed-point arithmetic, while operations on all other variables were compiled with floating-point arithmetic. This approach was primitive, but it provided immediate benefit to the programmer, namely correct machine code. A few years later, Algol 60 provided explicit syntax for distinguishing types, but this provided little benefit to the programmer beyond the fixed/floating point discrimination—and it was often ignored. Later languages that enforced type checking ran into programmer opposition to taking the time to write declarations, and the practice became acceptable only when it became clear that the type declarations enabled analysis that was immediately useful, namely discovering problems at compile time rather than execution time.

So type systems originally entered programming languages as a mechanism for making sure at compile time that the run-time values supplied for expression evaluation or procedure calls would be legitimate. (Morris later called this “Neanderthal verification.”) But the nuances of this determination are subtle and extensive, and type systems soon found a role in the research area of formal semantics of programming languages. Here they found a theoretical constituency, spawning their own problems and solutions.

Meanwhile, abstract data types were merging with the inheritance mechanisms of Smalltalk to become object-oriented design and programming models. The inheritance mechanisms provided ways to express complex similarities among types, and the separation of specification from implementation in abstract data types allowed management of the code that implemented families of components related by inheritance. Inheritance structures can be complex, and formal analysis techniques for reasoning about these structures soon followed.

With wider adoption of ML-like languages in the 1990s, the functional programming languages began to address practical problems, thereby drawing increasing attention from software developers for whom correctness is a critical concern—and for whom the prospect of assurances about the software justifies extra investment in analysis.

The operational abstraction and symbolic analysis lines of research made strong contact again in the development of the Java language, which incorporates strong assurances about type safety with object-oriented abstraction.

So two facets of programming language design—language mechanisms to support abstraction and incorporation of formal specification and semantics in languages—have an intertwined history, with advances on each line stimulated by problems from both lines, and with progress on one line sometimes stalled until the other line catches up.

Additional Observations

How are the results of research on languages, models, and formalisms to be evaluated? For operational abstractions, the models and the detailed specifications of relevant properties have a utilitarian function, so appropriate evaluation criteria should reflect the needs of software developers. Expertise in any field requires not only higher-order reasoning skills, but also a large store of facts, together with a certain amount of context about their implications and appropriate use. 6 It follows that models and tools intended to support experts should support rich bodies of operational knowledge. Further, they should support large vocabularies of established knowledge as well as the theoretical base for deriving information of interest.

Contrast this with the criteria against which mathematical systems are evaluated. Mathematics values elegance and minimality of mechanism; derived results are favored over added content because they are correct and consistent by their construction. These criteria are appropriate for languages whose function is to help understand the semantic basis of programming languages and the possibility of formal reasoning.

Given the differences in appropriate base language size that arise from the different objectives, it is small wonder that different criteria are appropriate, or that observers applying such different criteria reach different conclusions about different research results.

PROGRAMMING LANGUAGES AND COMPUTER SCIENCE

Alfred V. Aho, Columbia University, and James Larus, Microsoft Research

Software affects virtually every modern person’s life, often profoundly, but few appreciate the vast size and scope of the worldwide infrastructure behind it or the ongoing research aimed at improving it. Hundreds of billions of lines of software code are currently in use, with many more billions added annually, and they virtually run the gamut of conceivable applications. It has been possible to build all this software because we have been successful in inventing a wide spectrum of programming languages for describing the tasks we want computers to do. But like human languages, they are sometimes quirky and imperfect. Thus computer scientists are continually evolving more accurate, expressive, and convenient ways in which humans may communicate to computers.

Programming languages are different in many respects from human languages. A computer is capable of executing arithmetic or logical operations at blinding speeds, but it is in fact a device that’s frustratingly simpleminded—forever fixed in a concrete world of bits, bytes, arithmetic, and logic (see Hill in Chapter 2 ). Thus a computer must be given straightforward, unambiguous, step-by-step instructions. Humans, by contrast, can often solve highly complex problems using their innate strengths of formulating and employing abstraction.

To get a feel for the extent of this “semantic gap,” imagine explaining to a young child how to prepare a meal. Given that the child likely has no experience or context to draw upon, every step must be described clearly and completely, and omitting even the simplest detail can lead to a messy failure. Explaining tasks to computers is in many ways more difficult, because computers not only require far more detail but that detail must also be expressed in a primitive difficult-to-read notation such as binary numbers.

As an example of how programming languages bridge the gap between programmers and computers, consider numerical computation, one of the earliest applications of computers, dating back to World War II. A common mathematical operation is to multiply two vectors of numbers. Humans will use a notation such as A*B to indicate the multiplication (i.e., dot product) of vector A and vector B—knowing that this is shorthand for all of the individual steps actually needed to perform the multiplication. Computers, on the other hand, know nothing about vectors or the rules for multiplying them. They can only move numbers around; perform addition, multiplication, and other primitive mathematical operations on them; and make simple decisions. Expressed in terms of these primitive operations, a simple vector multiplication routine might require roughly 20 computer instructions, while a more sophisticated version, which improves performance by using techniques like instruction-level parallelism and caches (see Hill in Chapter 2), might require a few hundred instructions. Someone looking at this machine-language routine could easily be excused for not spotting the simple mathematical operation embodied in the complicated sequence of machine instructions.
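To make this gap concrete, here is a sketch in Python (our choice of language, not the text's). The first version mirrors the human-friendly A*B view as a single operation; the second spells out the kind of step-by-step work a machine-level routine must do, one primitive operation at a time.

```python
def dot(a, b):
    """High-level view: the dot product as one operation."""
    assert len(a) == len(b)
    return sum(x * y for x, y in zip(a, b))

def dot_stepwise(a, b):
    """Step-by-step view: explicit indexing, an accumulator,
    and a test-and-branch on every iteration — closer to what
    a machine-language routine actually does."""
    total = 0
    i = 0
    while i < len(a):
        total = total + a[i] * b[i]   # load, multiply, add, store
        i = i + 1                     # increment index, test, branch
    return total

print(dot([1, 2, 3], [4, 5, 6]))   # 32
```

Even this low-level sketch hides the real machine's register moves and cache behavior; the actual instruction sequence would be longer still.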

A high-level programming language addresses this “semantic gap” between human and machine in several ways. It can provide operations specifically designed to help formulate and solve a particular type of problem. A programming language specifically intended for numeric computation might use the human-friendly, concise notation A*B. It saves programmers from repeatedly reimplementing (or mis-implementing) the same operations. A software tool called a “compiler” translates the higher-level program into instructions executable by a computer.

Programmers soon realized that a program written in a high-level language could be run on more than one computer. Because the hardware peculiarities of a particular computer could be hidden in a compiler, rather than exposed in a language, programs could often be written in a portable language that can be run on several computers. This separation of high-level programs and computers expanded the market for commercial software and helped foster the innovative software industry.

Another advantage of compilers is that a program written in a high-level language often runs faster. Compilers, as a result of several decades of fundamental research on program analysis, code generation, and code-optimization techniques, are generally far better at translating programs into efficient sequences of computer instructions than are human programmers. The comparison is interesting and edifying.

Programmers can occasionally produce small and ingenious pieces of machine code that run much faster than the machine instructions generated by a compiler. However, as a program grows to thousands of lines or more, a compiler’s systematic, analytical approach usually results in higher-quality translations that not only execute far more effectively but also contain fewer errors.

Program optimization is a very fertile area of computer science research. A compiler improves a program by changing the process by which it computes its result to a slightly different approach that executes faster. A compiler is allowed to make a change only if it does not affect the result that the program computes.

Interestingly, true optimization is a goal that is provably impossible. An analysis algorithm that predicts if a nontrivial modification affects a program’s result can be used to solve the program equivalence problem, which is provably impossible because of Turing’s result (see Kleinberg and Papadimitriou in Chapter 2). Compilers side-step this conundrum by modifying a program only when it is possible to demonstrate that the change leaves the program’s result unaffected. Otherwise, they assume the worst and leave alone programs in which there is any doubt about the consequences of a change. The interplay between Turing’s fundamental result, which predates programming languages and compilers by many years, and the vast number of practical and effective tools for analyzing and optimizing programs is emblematic of computer science as a whole, which continues to make steady progress despite many fundamental limitations on computability.
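The conservative behavior described here can be illustrated with a toy constant folder — a hypothetical sketch of our own, not any real compiler's code. It rewrites an arithmetic expression only when both operands are provably known constants; anything it cannot prove safe, it leaves untouched.

```python
# Expressions are either literals/variable names, or tuples (op, left, right).
def fold(expr):
    """Fold constant arithmetic; assume the worst otherwise."""
    if not isinstance(expr, tuple):
        return expr                        # literal or variable name
    op, left, right = expr
    left, right = fold(left), fold(right)  # optimize subexpressions first
    if isinstance(left, int) and isinstance(right, int):
        if op == "+":
            return left + right            # provably safe: both known
        if op == "*":
            return left * right
    return (op, left, right)               # in doubt: leave the program alone

print(fold(("*", 2, ("+", 1, 3))))    # 8 — fully constant, so fully folded
print(fold(("*", "x", ("+", 1, 3))))  # ('*', 'x', 4) — partial fold only
```

The unknown variable `x` blocks the outer multiplication, exactly in the spirit of "modify only when the change demonstrably leaves the result unaffected."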

The past half century has seen the development of thousands of programming languages that use many different approaches to writing programs. For example, some languages, so-called imperative languages, specify how a computation is to be done, while declarative languages focus on what the computer is supposed to do. Some languages are general-purpose, but many others are intended for specific application domains. For example, the languages C and C++ are commonly used in systems programming, SQL in writing queries for databases, and PostScript in describing the layout of printed material. Innovations and new applications typically produce new languages. For example, the Internet spurred development of Java for writing client/server applications and JavaScript and Flash for animating Web pages.

One might ask, “Are all of these languages necessary?” Turing’s research on the nature of computing (see Kleinberg and Papadimitriou in Chapter 2 ) offers one answer to this question. Since almost every programming language is equivalent to Turing’s universal computing machine, they are all in principle capable of expressing the same algorithms. But the choice of an inappropriate language can greatly complicate programming. It is not unlike asking whether a bicycle, car, and airplane are interchangeable modes of transportation. Just as it would be cumbersome, at best, to fly a jet to the grocery store to buy milk, so using the wrong programming language can make a program much longer and much more difficult to write and execute.

Today, most programs are written by teams of programmers. In this world, many programming problems and errors arise from misunderstandings of intent, misinformation, and human shortcomings, so language designers have come to recognize that programming languages convey information among human programmers, as well as to computers.

Language designers soon realized that programming languages must be extensible as well as computationally universal, as no one language could provide operations appropriate for all types of problems. Languages today offer many general mechanisms for programmers to use in addressing their specific problems. One of the early and most fundamental of these mechanisms introduced into programming languages was the “procedure,” which collects and names the code to perform a particular operation. So, for example, a programmer who wants to implement operations that involve multiplying vectors in a language in which this capability is not built in could create a procedure with a meaningful name, such as “MultiplyVector,” and simply cite that name to invoke that procedure whenever needed—as opposed to rewriting the same set of instructions each time. And programmers could then use the procedure in other programs rather than reinventing the wheel each time. Procedures of this sort have understandably become the fundamental building blocks of today’s programs.
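As a minimal sketch of the idea — in Python, rather than the unspecified language of the example — a `multiply_vector` procedure plays the role of “MultiplyVector”: the code is written and named once, then invoked by name wherever needed.

```python
def multiply_vector(a, b):
    """Collects and names the instructions for a dot product."""
    return sum(x * y for x, y in zip(a, b))

# Reuse by name in different contexts, instead of rewriting
# the same set of instructions each time:
work = multiply_vector([1, 0, 2], [3, 4, 5])      # 13
projection = multiply_vector([1, 1], [2, 2])      # 4
print(work, projection)
```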

Another early insight is built on the fact that statements in a program typically execute in one of a small number of different patterns; thus the patterns themselves could be added to the vocabulary of a language rather than relying on a programmer to express the patterns with simpler (and a larger number of) statements. For example, a common idiom is to execute a group of statements repeatedly while a condition holds true. This is written:

while (condition) do statement

Earlier languages did not provide this feature and instead relied on programmers to construct it, each time it was needed, from simpler statements:

test: if (not condition) then goto done;
      statement;
      goto test;
done:

The latter approach has several problems: the program is longer, the programmer’s intent is more difficult to discern, and possibilities for errors increase. For example, if the first statement said “goto test” instead of “goto done,” this piece of code would never terminate.
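The contrast can also be sketched in Python (our illustration; Python has no goto, so an unconditional loop with an explicit test-and-exit stands in for the goto construction here):

```python
# Idiomatic: the repetition pattern is part of the language's vocabulary.
n, total = 5, 0
while n > 0:
    total += n
    n -= 1

# Hand-built from simpler pieces, as programmers once had to do:
m, total2 = 5, 0
while True:
    if not (m > 0):
        break              # plays the role of "goto done"
    total2 += m
    m -= 1                 # falling to the top plays the role of "goto test"

print(total, total2)   # 15 15
```

The hand-built version is longer and its intent is harder to discern — and a mistake in the exit test would loop forever, just as the text warns.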

Incorporation of new constructs to aid in the development of more robust software systems has been a continuing major trend in programming-language development. In addition to well-structured features for controlling programs such as the “while loop,” other improvements include features that permit dividing up software into modules, strong type checking to catch some errors at compile time rather than run time, and incorporation of automated memory management that frees the programmer from worrying about details of allocating and deallocating storage. These features not only improve the ability of a programming language to express a programmer’s intent but also offer better facilities for detecting inconsistencies and other errors in programs.

Today’s huge and ever-growing software infrastructure presents an enormous challenge for programmers, software companies, and society as a whole. Because programs are written by people, they contain defects known as bugs. Even the best programs, written using the most advanced software engineering techniques, contain between 10 and 10,000 errors per million lines of new code. Some defects are minor, while others have the potential to disrupt society significantly.

The constantly evolving programming languages, techniques, and tools have done much to improve the quality of software. But the software revolution is always in need of some sweetening. Programming-language researchers are devoting increasing attention to producing programs with far fewer defects and systems with much higher levels of fault tolerance. They are also developing software verification tools of greater power and rigor that can be used throughout the software development process. The ultimate research goal is to produce programming languages and software development tools with which robust software systems can be created routinely and economically for all of tomorrow’s applications.

Computer Science: Reflections on the Field, Reflections from the Field provides a concise characterization of key ideas that lie at the core of computer science (CS) research. The book offers a description of CS research recognizing the richness and diversity of the field. It brings together two dozen essays on diverse aspects of CS research, their motivation and results. By describing in accessible form computer science’s intellectual character, and by conveying a sense of its vibrancy through a set of examples, the book aims to prepare readers for what the future might hold and help to inspire CS researchers in its creation.


Abstraction in Computer Science Explained (With Examples)

Mohammad Jamiu


Abstraction is a fundamental concept in computer science that helps simplify complex systems, making them more manageable and easier to understand.

It allows programmers to focus on essential aspects while hiding intricate details.

In this article, we will explore the concept of abstraction in computer science, its importance, how it works, and provide real-world examples to illustrate its practical applications.


What Is Abstraction in Computer Science?

Abstraction, in the context of computer science, involves creating simplified models or representations of complex systems.

It aims to capture the essential functionalities and characteristics while hiding unnecessary details.

Just like a summary or a high-level overview, abstraction provides a way to grasp the fundamental concepts without getting overwhelmed by intricacies.

Why Is Abstraction Important?

Abstraction plays an important role in computer science for several reasons:

You can remember these reasons with the acronym SM-REF, each letter denoting the first letter of one of the points below.

  • Simplification : By abstracting complex systems, we can focus on the core concepts and ignore unnecessary details. This simplifies the understanding and analysis of complex problems.
  • Modularity : Abstraction enables the breaking down of a system into modular components. Each component has a specific responsibility, making it easier to understand and modify individual parts without affecting the entire system.
  • Reusability : When we abstract systems into modular components, these components can be reused in different projects. This saves time and effort, promoting code efficiency and productivity.
  • Encapsulation : Abstraction facilitates encapsulation, which is the bundling of data and related functionalities into a single unit. It provides a clear interface and hides the internal implementation, making it easier to use and maintain.
  • Focus on Essential Concepts : Abstraction allows programmers to concentrate on essential concepts and problem-solving strategies, rather than getting bogged down by low-level details.

How Does Abstraction Work?

Abstraction works by creating simplified representations of complex systems. These representations capture the key functionalities and hide unnecessary details.

It allows us to interact with the system at a higher level of abstraction, using well-defined interfaces and operations.

For example, when designing a car simulation in a video game, we create an abstraction called “Car” that encapsulates the behaviors and characteristics of a car, such as accelerating, braking, and steering.

We can interact with the “Car” object without needing to know the intricate details of the physics engine or the underlying algorithms.
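Here is a minimal Python sketch of such a “Car” abstraction — a toy model of our own, not code from the article. Callers use `accelerate` and `brake` through the public interface; the internal state (a stand-in for the physics engine) stays hidden.

```python
class Car:
    """Public interface: accelerate and brake. The internal details
    (a toy stand-in for a real physics engine) are hidden."""
    def __init__(self):
        self._speed = 0.0                 # internal state, not part of the interface

    def accelerate(self, amount):
        self._speed += amount

    def brake(self, amount):
        self._speed = max(0.0, self._speed - amount)  # speed never goes negative

    @property
    def speed(self):
        return self._speed

car = Car()
car.accelerate(30)
car.brake(10)
print(car.speed)   # 20.0
```

A richer simulation could replace the body of these methods with real physics without changing how the rest of the game interacts with the `Car` object.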

Examples of Abstraction

Abstraction is prevalent in various aspects of computer science. Here are a few examples:

  • Graphical User Interfaces (GUI) : GUIs provide an abstraction layer that simplifies user interaction with complex software applications. Users can interact with buttons, menus, and icons without needing to understand the underlying code.
  • Operating Systems : Operating systems abstract the hardware components and provide a simplified interface for software applications. They handle resource management, process scheduling, and device drivers, shielding the application developers from low-level hardware details.
  • Programming Languages : Programming languages offer abstractions that simplify the process of writing code. They provide higher-level constructs and syntax, allowing developers to express their ideas and solve problems without worrying about low-level machine instructions.
  • Database Systems : Database systems abstract the complexities of data storage and retrieval. They provide query languages, such as SQL, that allow users to interact with databases using simple and intuitive commands.
  • Networking Protocols : Networking protocols, such as TCP/IP, abstract the complexities of transmitting data over networks. They handle packet routing, error detection, and data fragmentation, providing a reliable and simplified communication interface.

These examples highlight how abstraction simplifies complex systems, making them accessible and easier to work with.
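As a small illustration of this kind of layering, here is a hypothetical storage abstraction in Python (our example, not from the article). Application code talks only to an interface, so the underlying implementation could be swapped — say, for a disk- or network-backed store — without changing the caller, much as database systems hide storage details behind a query interface.

```python
class KeyValueStore:
    """Abstract interface: callers use get/put and never see the storage."""
    def get(self, key):
        raise NotImplementedError
    def put(self, key, value):
        raise NotImplementedError

class InMemoryStore(KeyValueStore):
    """One concrete implementation behind the interface."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value

# Application code depends only on the abstract interface:
def remember_user(store, name):
    store.put("user", name)
    return store.get("user")

print(remember_user(InMemoryStore(), "ada"))   # ada
```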

Abstraction is a powerful concept in computer science that simplifies complex systems, promotes code efficiency, and enhances the overall development process. By creating simplified models, programmers can focus on essential aspects while ignoring unnecessary details.

Abstraction is a fundamental skill for aspiring programmers and computer scientists, enabling them to design and develop robust and scalable solutions.

Comprehending what abstraction is all about allows us to tackle complex problems with clarity and efficiency, making it an essential concept in the field of computer science.

FAQs about Abstraction in Computer Science

  • Why is abstraction important in computer science? Abstraction is important in computer science because it simplifies complex systems, promotes code reusability, enhances modularity, and allows developers to focus on essential concepts.
  • How does abstraction improve code quality? Abstraction improves code quality by simplifying the understanding of complex systems, promoting modular and reusable code, and allowing for easier maintenance and updates.
  • What are some real-world examples of abstraction in computer science? Real-world examples of abstraction in computer science include graphical user interfaces, operating systems, programming languages, database systems, and networking protocols.
  • Can beginners understand abstraction in computer science? Yes, beginners can understand abstraction in computer science by thinking of it as a way to simplify complex systems and focus on the important parts while hiding the unnecessary details.
  • How can abstraction be applied in programming? Abstraction can be applied in programming by creating modular components, using well-defined interfaces, and encapsulating functionalities. It allows for easier code maintenance, reusability, and scalability.
  • What is the difference between abstraction and encapsulation? Abstraction and encapsulation are related concepts. Abstraction focuses on simplifying complex systems, while encapsulation bundles data and related behaviors into a single unit. Encapsulation is a means to achieve abstraction.


International Symposium on Abstraction, Reformulation, and Approximation

SARA 2005: Abstraction, Reformulation and Approximation, pp. 351

Abstract Representation in Painting and Computing

Robert Zimmer (conference paper)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 3607)

This paper brings together two strands of my research: an interest in abstraction in AI computing systems (see, for example, [1]) and an interest in the study of paintings as a key to understanding perception and cognition (see, for example, [2]). Our senses of the world are informed by the art we make and by the art we inherit and value, works that in themselves encode others’ worldviews. This two-way effect is deeply rooted: art encodes and affects both a culture’s ways of perceiving the world and its ways of remaking the world it perceives. The purpose of this paper is to indicate ways in which a study of abstraction in art can be used to discover insights into our perception of the world and how these insights may be employed, in turn, to develop computing systems that can take advantage of some of these forms of abstraction both in their own processing and in the way they present themselves to users.


References

[1] Holte, R.C., Mkadmi, T., Zimmer, R.M., MacDonald, A.J.: Speeding Up Problem-Solving by Abstraction: A Graph-Oriented Approach. Artificial Intelligence 85, 321–361 (1996)

[2] Zimmer, R.: Abstraction in Art with Implications for Perception. Philosophical Transactions of the Royal Society B (June 2003)

Author information

Robert Zimmer, Goldsmiths Digital Studios, Goldsmiths College, University of London, New Cross, London, SE14 6NW, UK

Editor information

Jean-Daniel Zucker, UR 079 GEODES, IRD, 32 avenue Henri Varagnat, 93143, Bondy, France

Lorenza Saitta, Dip. di Informatica, Università del Piemonte Orientale, Via Bellini 25/G, 15100, Alessandria, Italy

Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper:

Zimmer, R. (2005). Abstract Representation in Painting and Computing. In: Zucker, JD., Saitta, L. (eds) Abstraction, Reformulation and Approximation. SARA 2005. Lecture Notes in Computer Science(), vol 3607. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11527862_28


DOI : https://doi.org/10.1007/11527862_28

Publisher Name : Springer, Berlin, Heidelberg

Print ISBN : 978-3-540-27872-6

Online ISBN : 978-3-540-31882-8

eBook Packages: Computer Science, Computer Science (R0)



Computer Science > Computer Vision and Pattern Recognition

Title: Probabilistic Directed Distance Fields for Ray-Based Shape Representations

Abstract: In modern computer vision, the optimal representation of 3D shape continues to be task-dependent. One fundamental operation applied to such representations is differentiable rendering, as it enables inverse graphics approaches in learning frameworks. Standard explicit shape representations (voxels, point clouds, or meshes) are often easily rendered, but can suffer from limited geometric fidelity, among other issues. On the other hand, implicit representations (occupancy, distance, or radiance fields) preserve greater fidelity, but suffer from complex or inefficient rendering processes, limiting scalability. In this work, we devise Directed Distance Fields (DDFs), a novel neural shape representation that builds upon classical distance fields. The fundamental operation in a DDF maps an oriented point (position and direction) to surface visibility and depth. This enables efficient differentiable rendering, obtaining depth with a single forward pass per pixel, as well as differential geometric quantity extraction (e.g., surface normals), with only additional backward passes. Using probabilistic DDFs (PDDFs), we show how to model inherent discontinuities in the underlying field. We then apply DDFs to several applications, including single-shape fitting, generative modelling, and single-image 3D reconstruction, showcasing strong performance with simple architectural components via the versatility of our representation. Finally, since the dimensionality of DDFs permits view-dependent geometric artifacts, we conduct a theoretical investigation of the constraints necessary for view consistency. We find a small set of field properties that are sufficient to guarantee a DDF is consistent, without knowing, for instance, which shape the field is expressing.


MIT News | Massachusetts Institute of Technology

A blueprint for making quantum computers easier to program

[Image: stylized drawing of a computer monitor with a black screen, surrounded by green beams of light and a completed task list on each side; behind these objects are two IBM quantum computers, shown as cylinders connected to wires]

When MIT professor and now Computer Science and Artificial Intelligence Laboratory (CSAIL) member Peter Shor first demonstrated the potential of quantum computers to solve problems faster than classical ones, he inspired scientists to imagine countless possibilities for the emerging technology. Thirty years later, though, the quantum edge remains a peak not yet reached. Unfortunately, the technology of quantum computing isn’t fully operational yet. One major challenge lies in translating quantum algorithms from abstract mathematical concepts into concrete code that can run on a quantum computer. Whereas programmers for regular computers have access to myriad languages such as Python and C++ with constructs that align with standard classical computing abstractions, quantum programmers have no such luxury; few quantum programming languages exist today, and they are comparatively difficult to use because quantum computing abstractions are still in flux. In their recent work, MIT researchers highlight that this disparity exists because quantum computers don’t follow the same rules for how to complete each step of a program in order — an essential process for all computers called control flow — and present a new abstract model for a quantum computer that could be easier to program.

In a paper soon to be presented at the ACM Conference on Object-oriented Programming, Systems, Languages, and Applications, the group outlines a new conceptual model for a quantum computer, called a quantum control machine, that could bring us closer to making programs as easy to write as those for regular classical computers. Such an achievement would help turbocharge tasks that are impossible for regular computers to efficiently complete, like factoring large numbers, retrieving information in databases, and simulating how molecules interact for drug discoveries. “Our work presents the principles that govern how you can and cannot correctly program a quantum computer,” says lead author and CSAIL PhD student Charles Yuan SM ’22. “One of these laws implies that if you try to program a quantum computer using the same basic instructions as a regular classical computer, you’ll end up turning that quantum computer into a classical computer and lose its performance advantage. These laws explain why quantum programming languages are tricky to design and point us to a way to make them better.”

Old school vs. new school computing

One reason why classical computers are relatively easier to program today is that their control flow is fairly straightforward. The basic ingredients of a classical computer are simple: binary digits or bits, a simple collection of zeros and ones. These ingredients assemble into the instructions and components of the computer’s architecture. One important component is the program counter, which locates the next instruction in a program much like a chef following a recipe, by recalling the next direction from memory. As the algorithm sequentially navigates through the program, a control flow instruction called a conditional jump updates the program counter to make the computer either advance forward to the next instruction or deviate from its current steps.
By contrast, the basic ingredient of a quantum computer is a qubit, which is a quantum version of a bit. This quantum data exists in a state of zero and one at the same time, known as a superposition. Building on this idea, a quantum algorithm can choose to execute a superposition of two instructions at the same time — a concept called quantum control flow.

The problem is that existing designs of quantum computers don’t include an equivalent of the program counter or a conditional jump. In practice, that means programmers typically implement control flow by manually arranging logical gates that describe the computer’s hardware, which is a tedious and error-prone procedure. To provide these features and close the gap with classical computers, Yuan and his coauthors created the quantum control machine — an instruction set for a quantum computer that works like the classical idea of a virtual machine. In their paper, the researchers envision how programmers could use this instruction set to implement quantum algorithms for problems such as factoring numbers and simulating chemical interactions.

As the technical crux of this work, the researchers prove that a quantum computer cannot support the same conditional jump instruction as a classical computer, and show how to modify it to work correctly on a quantum computer. Specifically, the quantum control machine features instructions that are all reversible — they can run both forward and backward in time. A quantum algorithm needs all instructions, including those for control flow, to be reversible so that it can process quantum information without accidentally destroying its superposition and producing a wrong answer.
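
To see what reversibility demands of an instruction set, here is a purely classical sketch, invented for illustration rather than taken from the paper's quantum control machine: every instruction has an inverse, so running the inverses in reverse order exactly undoes the program.

```python
def step(regs, inst, inverse=False):
    """Apply one reversible instruction (or its inverse) to a register list."""
    op, a, b = inst
    if op == "add":      # regs[a] += regs[b]; its inverse subtracts instead
        regs[a] += -regs[b] if inverse else regs[b]
    elif op == "neg":    # regs[a] = -regs[a]; this instruction is its own inverse
        regs[a] = -regs[a]
    return regs

def run(regs, program):
    """Run the program forward in time."""
    for inst in program:
        step(regs, inst)
    return regs

def unrun(regs, program):
    """Run the program backward in time: inverses, in reverse order."""
    for inst in reversed(program):
        step(regs, inst, inverse=True)
    return regs

prog = [("add", 0, 1), ("neg", 1, None), ("add", 1, 0)]
state = run([3, 4], prog)            # forward: [3, 4] becomes [7, 3]
assert unrun(state, prog) == [3, 4]  # running backward recovers the input
```

Note that an ordinary destructive assignment like `regs[a] = 0` could not appear here: the old value would be lost and no inverse could restore it, which is the classical analogue of destroying a superposition.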

The hidden simplicity of quantum computers

According to Yuan, you don’t need to be a physicist or mathematician to understand how this futuristic technology works. Quantum computers, he says, don’t have to be arcane machines that require scary equations to understand. With the quantum control machine, the CSAIL team aims to lower the barrier to entry for people to interact with a quantum computer by raising the unfamiliar concept of quantum control flow to a level that mirrors the familiar concept of control flow in classical computers. By highlighting the dos and don’ts of building and programming quantum computers, the researchers hope to educate people outside the field about the power of quantum technology and its ultimate limits.

Still, the researchers caution that, as with many other designs, it’s not yet possible to turn their work directly into a practical hardware quantum computer, given the limitations of today’s qubit technology. Their goal is to develop ways of implementing more kinds of quantum algorithms as programs that make efficient use of a limited number of qubits and logic gates. Doing so would bring these algorithms closer to running on the quantum computers that could come online in the near future.

“The fundamental capabilities of models of quantum computation have been a central discussion in quantum computation theory since its inception,” says MIT-IBM Watson AI Lab researcher Patrick Rall, who was not involved in the paper. “Among the earliest of these models are quantum Turing machines, which are capable of quantum control flow. However, the field has largely moved on to the simpler and more convenient circuit model, which lacks control flow. Yuan, Villányi, and Carbin successfully capture the underlying reason for this transition using the perspective of programming languages. While control flow is central to our understanding of classical computation, quantum is completely different! I expect this observation to be critical for the design of modern quantum software frameworks as hardware platforms become more mature.”

The paper lists two additional CSAIL members as authors: PhD student Ági Villányi ’21 and Associate Professor Michael Carbin. Their work was supported, in part, by the National Science Foundation and the Sloan Foundation.
