The Dull Definition
If we look for a definition of computer programming, we will find variations of roughly the same thing: “programming is a process that leads from an original formulation of a computing problem to executable programs”. That’s Wikipedia’s definition. It doesn’t really explain much, does it? To understand what computer programming really is, we need to first look at what a program is and how it makes the computer do something. Let’s use an analogy.
Can I Get A Cup Of Coffee?
Consider the following actions:
- Stand up
- Walk to the machine
- Press the Add Sugar button
- Press the Coffee button
- Wait for the machine to fill the paper cup
- Take the cup
This short list of actions is actually an algorithm. Algorithm is a fancy mathematical term for a finite list of well-defined instructions. If you were a computer following this list, we would say that you were running a program. The written list of instructions itself is called source code.
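To make this concrete, the coffee list can be written as a tiny program. Here is a sketch in Python (one of the languages mentioned later in this lesson); the step list and the function name are ours, purely for illustration.

```python
# The coffee-machine algorithm from above, written as a small Python program.
# Each instruction becomes one step that the computer executes in order.
steps = [
    "Stand up",
    "Walk to the machine",
    "Press the Add Sugar button",
    "Press the Coffee button",
    "Wait for the machine to fill the paper cup",
    "Take the cup",
]

def get_coffee():
    """Run the algorithm: carry out each instruction, one after the other."""
    for number, step in enumerate(steps, start=1):
        print(f"Step {number}: {step}")

get_coffee()
```

This file is the source code, the list inside it is the algorithm, and what happens when the computer executes it is the program running.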
Wait! That’s a whole lot of new material. Let’s recap, shall we?
- Source code: the sheet of paper (or the computer file) containing the written instructions
- Algorithm: the grouping of the instructions that one must take to achieve something
- Program: the result of one or more source code files containing one or more algorithms
In this particular example, we used English as the language for the source code. We usually speak French (well, Nick does), and we had to translate these instructions into a language that you would understand. That’s exactly what computer programming is: translating instructions into a language a computer can understand, using a programming language. Your computer doesn’t speak English. It speaks computer. And you’re the translator.
More Than One Tongue
Just as several human languages exist (English, French, Italian, Spanish), the same holds true in the computer world, with regard to programming languages. And much like the languages we know and speak every day, each programming language has its own grammatical rules and idioms. In the end, though, they all achieve the same thing: the communication of your instructions to a computer, which then executes those instructions accordingly. Much like speaking English in England and French in France, you would choose one programming language over another based on its strengths and weaknesses for the particular task at hand. Some languages are better suited for the creation of games, some for web programming, and others for mobile applications. Some companies push or even force a particular language on you for building software for their platform. This is the case with Microsoft and the C# language or Apple with Objective-C and Swift.
Even though we can use different programming languages, your computer actually understands only one: binary code. It’s not an expansive language: it consists of only two characters, 0 and 1. For example, the letter A would be 01000001 in binary code. This is because ultimately, a processor (the component in a computer that does…well…the processing of the instructions contained in a program) can only have two states: electricity passing through it (1) or no electricity (0). Even if you are not familiar with what a processor is, you may still have heard of Intel, the company building the processors for Mac computers, or of the A8 processors designed by Apple and commonly found in iPhones and iPads.
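You can check this encoding for yourself. The short Python sketch below asks for the numeric code of the letter A and writes it out as eight binary digits:

```python
# The letter "A" is stored as the number 65, which is 01000001 in binary.
code = ord("A")               # the numeric code of the character "A": 65
binary = format(code, "08b")  # the same number, written as eight binary digits
print(code, binary)
```

The `ord` function looks up a character’s numeric code, and the `"08b"` format spells that number out as eight bits, exactly the 01000001 mentioned above.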
Writing programs using binary code as the programming language would be a tedious task (even though that’s exactly what the early programmers did, using punched cards: a hole for a 1, no hole for a 0!). This is precisely why higher-level languages were created: BASIC, C, Java, Ruby, Python, PHP, the brand-new Swift, and many more. By using a programming language that resembles human languages (usually English), programmers can be far more productive. Whichever programming language we choose, though, its source code will ultimately be translated into binary code by a compiler or an interpreter.
Compiled Or Interpreted
Although they’re used for the same purpose (writing computer programs!), there are two distinct families of programming languages. Let’s meet them.
Once a program is written using a compiled language, the source code needs to pass through a compiler that will translate it into binary code and output what is known as the executable. This executable, the result of the compilation, is what you then run to use the program. Running a program is what you do when you tap an icon on an iPhone, for example. A compiler itself is a special program whose purpose is to translate source code to binary code, thus creating programs (mind blowing, we know).
A compiler for the particular language you want to use must be present on your machine. Some of them come preinstalled. For example, if you want to write code in Swift, you must first install the Swift compiler (built on top of a toolchain known as LLVM) on your computer. When you need a compiler, we’ll walk you through the steps of installing one.
The upside of compiled languages is that they only need to be translated (or compiled, to use the proper term) once to produce a binary executable that the computer can understand. Programs written with them are quicker to execute. The downside is that they’re not easily portable. A program compiled for Microsoft Windows will run on Microsoft Windows only. A program compiled for Mac will only run on a Mac.
Examples of compiled languages include C, C++, Objective-C, COBOL, and Swift.
A program written using an interpreted language also gets translated into binary code, but this is done by an interpreter each time the program is run. To execute such a program, you first need to install the correct interpreter on the machine on which you wish to run the program. If the program is written in Java, you need the JVM (Java Virtual Machine); for Ruby, you need MRI (Matz’s Ruby Interpreter). There’s a specific interpreter for each interpreted language.
One compelling reason to use an interpreted language is that, in most cases, you’ll find an interpreter for it on multiple platforms. For example, the same Java code can run on Microsoft Windows, Mac, and Linux. This does come at a cost: execution is slower (sometimes much slower) than for programs written in a compiled language, because the interpreter must translate the source code into binary code each time before the computer can run it.
Examples of interpreted languages include Java, Python, Ruby, and PHP.
Cool! Now you know what programming is: writing instructions in source code files as collections of algorithms, using a programming language. Try saying that three times fast! The source code will ultimately result in an interpreted or compiled computer program.
What else do you now know? That computers can only understand binary code and that programming languages were created as an intermediary between human languages and binary code.
We hope this lesson has demystified what programming actually is. In the next lesson, we’ll pretend you’re a computer. It’s going to be fun.
Here are the key terms to remember from this lesson:
- Programming language
- Source code
- Binary code
Before we continue, let’s make sure we understand that learning to program is no easy feat. With hard work and dedication, learning to code is an entirely achievable goal, and an extremely rewarding one, too. Anybody touting the “learn to code in a week/month” trope is not painting the full picture. Coding is a life-long journey of learning. And an exciting one at that!
Our goal is to teach you high-quality programming, so in this exercise, here’s what we want you to do: reflect on why, exactly, you want to learn how to code. Put it into words and use them to introduce yourself to your fellow students on the Forums. Learning can be a solitary endeavour, but it doesn’t have to be. We encourage you to connect with those who have chosen to travel the same path.
Oh, and Welcome!