Part of a series of posts explaining programming for the lay-person.
This post builds on the idea of types and extends yesterday’s discussion of user-defined types and contiguity; if you’re new here, start with the introduction to the series.
Computer Scientists recognize that there are many ways to achieve computation, and that any task one technique can perform, any other technique can perform as well. I talked a little bit about that in the opening of my post on state and expect to get into it in more detail once I conclude my unlocking programming series and begin delving into the mysteries of computational theory. Computer Engineers, however, have long known that silicon transistors are the way to go and have developed a very consistent model of how to assemble them.
Today’s computers are laid out with two key elements: processors and memory. The processors do things to stuff stored in memory. In practice there are caches and buses and specialized processors and storage and peripherals galore, but they all serve as “supporting cast”, not the stars.
Memory is laid out as a huge number of little numbered bins called bytes, where each byte can hold one of 256 values. If you have, say, a 4 gigabyte memory, that means there are 4 × 2^30 = 4,294,967,296 of these little bins. Meaning there are a total of 256^4,294,967,296 = 2^34,359,738,368 possible configurations of a 4GB memory, a number more than ten billion digits long. To access all this memory, the computer stores addresses in groups of 4 (for a 32-bit architecture) or 8 (for a 64-bit architecture) consecutive bins; together, the bins in a group store the number of another bin. Hence, a 32-bit computer can’t use more than 4GB of memory, since 4,294,967,295 is the biggest number that fits in four bins; a 64-bit computer can handle up to 16EB. That address size is essentially the only difference between the two.
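To make the bins concrete, here is a little sketch in Python (my choice of language for illustration, not the post’s) that models memory as a row of numbered bins and stores one bin’s number across four consecutive bins, just as a 32-bit machine does:

```python
# A toy model of memory: a row of numbered bins, each holding one of 256 values.
memory = bytearray(16)  # sixteen bins, numbered 0 through 15

# Store 4,294,967,295 -- the biggest number four bins can hold --
# across bins 0..3, one byte (one bin) at a time.
address = 4_294_967_295
memory[0:4] = address.to_bytes(4, byteorder="little")

# Read the four bins back together to recover the original number.
recovered = int.from_bytes(memory[0:4], byteorder="little")
print(recovered)  # the same number we stored

# An eight-bin group could name 2^64 bins instead of 2^32; that, in
# essence, is the jump from 32-bit to 64-bit addressing.
```

The choice of `byteorder` is one of those supporting-cast details: real machines disagree on which end of a number goes in the lowest-numbered bin, but either convention works as long as writer and reader agree.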
There’s really nothing fancier than this going on. In a (group of) bin(s) I might find a plain value (like a number or a letter), the number of some other bin, or an instruction telling the processor what to do.
We haven’t talked about instructions being in memory before, nor am I likely to come back to it. Most of the time you never need to worry about this; programming languages are designed (in part) to keep the instructions and data separate and to hide the details of the instructions from the programmer. Mixing the two is how many viruses work, though: they write instructions where you were supposed to have data and then take advantage of some glitch in the program to fool it into running the new instructions instead of its old ones.
Yesterday I mentioned one way of making lists: put a lot of values next to each other in an array. Another way to make a list is to have a record type that contains one value and the bin number of another record. Thus, the first record holds only one element of the list, but it also has a pointer to where the next element can be found. As we keep following one pointer after another we explore an entire linked list. Address, pointer, and link all mean much the same thing, but are used in slightly different contexts: an “address” is the bin number where a thing is stored, a “pointer” is a bin that stores another bin’s number, and records connected by pointers are “linked.”
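A linked list takes only a few lines to sketch in Python (again my illustrative choice; the post itself names no language). Each record is a tiny object holding one value and a pointer to the next record:

```python
class Node:
    """One record in a linked list: a value plus a pointer to the next record."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

# Build the list 1 -> 2 -> 3 by pointing each record at the next.
head = Node(1, Node(2, Node(3)))

# Walk the list by following one pointer after another.
node = head
while node is not None:
    print(node.value)
    node = node.next
```

The last record’s pointer is `None`, the conventional “points at nothing” value, which is how the walk knows when to stop.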
When a number of individual records point to one another, we call the entire collection a linked data structure and refer to each of its constituent records as nodes. Unlike arrays, there’s no need for any special type for data structures: to the computer they are just records that happen to store pointers. It’s how we use them that makes them special.
Besides lists, the most common linked data structure is a tree. If each node contains pointers to two or more other nodes then we have a structure that “branches” like a tree. Trees turn out to have all sorts of interesting and useful properties and show up a lot in programming.
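In the same sketched Python as above, a tree node simply holds two pointers instead of one, and following both of them visits the whole branching structure:

```python
class TreeNode:
    """A record whose two pointers make the structure branch like a tree."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

# A small tree:       "root"
#                    /      \
#               "left"      "right"
root = TreeNode("root", TreeNode("left"), TreeNode("right"))

def count_nodes(node):
    """Visit every node by following both pointers."""
    if node is None:
        return 0
    return 1 + count_nodes(node.left) + count_nodes(node.right)

print(count_nodes(root))  # 3
```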
As a novice programmer, you’ll likely never need to think about pointers and links at all. Most modern languages provide nicely-packaged versions of every data structure you’re likely to need, and newer languages make the existence of pointers mostly invisible. Then one day you’ll be programming merrily along and your program will do Strange and Unexpected Things.
After some frustration you’ll discover one of two things. Either changing X in subroutine baz inexplicably changed Y in subroutine flub as well, or your subroutine that was supposed to change Z didn’t do anything to it at all. Either way, the problem you are facing is one of aliasing: can two names refer to the same value?
Humans are not consistent beings. We rely on context to understand each other and we usually aren’t even aware that that context exists. If I ask you to find the square root of x, you understand from context that I don’t want you to change x itself, just to give me the value of its square root. If I ask you to swap x and y you understand from context that I do want you to change x and y, not just to give me two numbers back without changing anything.
Computers don’t have this context (and programmers are bad at making it explicit). Programming language designers have to decide: “when someone invokes a subroutine, do I give the subroutine pointers to the arguments, so it can change them, or do I give it copies of the arguments, so that changing them doesn’t change the originals?” There are other options too, the so-called “call by” evaluation strategies: call by value-result, call by name, lazy evaluation, and so on. None of them avoids the problem discussed here. Either way, some people are frustrated, either when their subroutine fails to do anything or when it does something it was not supposed to do.
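Both frustrations can be reproduced in a few lines of Python (my example, not the post’s), since Python hands subroutines copies of pointers rather than copies of values:

```python
def swap(a, b):
    # This only rebinds the subroutine's local names a and b; the
    # caller's variables still point where they always did.
    a, b = b, a

x, y = 1, 2
swap(x, y)
print(x, y)  # still 1 2: the subroutine "didn't do anything"

def empty_out(items):
    # This follows the pointer and changes the shared value itself.
    items.clear()

mine = [1, 2, 3]
yours = mine          # two names, one value: aliasing
empty_out(yours)
print(mine)           # []: changing "yours" changed "mine" too
```

The first subroutine is the “didn’t do anything” surprise; the second is the “inexplicably changed something else” surprise, and both come from the same design decision about what a subroutine receives.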
These are the sorts of nuances that make many programmers drift toward overly-literal puns. You learn, as a programmer, to think about what you said, not just what you meant, and about context that makes the statements meaningful; for some people, this starts to spill over into daily life.