Friday, August 28, 2015

Some Pointers

The C programming language has a lot of neat features.  It simultaneously allows you to write programs at a fairly high level of abstraction and to access the machine at the lowest level, just like assembly language.  The term "high level assembly language" has been used to describe it.  Programmers don't like it when the language gets in the way of what they are trying to do.  C rarely gets in the way of anything low-level, unlike most other high-level languages.  In fact, it rarely gets in the way of much of anything.  C allows you to do just about anything you want.  Modern ANSI standard C has made it a bit more difficult, but safer, to do those things.  But the flavor is still there.  Being able to do anything you want will quickly get you into trouble.  ANSI C still allows it, but posts warning signs that you must acknowledge first.  Ignore them at your own risk.  This post is about what happens when you ignore them without fully thinking it through.

Earlier this week I was asked to review some embedded interrupt code.  Code reviews are a mixed blessing: they are extremely useful and important, but you end up telling others how to do their job.  You can also pick up new techniques by reviewing others' code, which is a nice bonus.  But enough about code reviews; perhaps a later post will cover them.  This particular review prompted me to write this post.

A "pointer" in a programming language is simply a variable that holds, instead of a useful value, an address of a piece of memory that (usually) holds the useful value.  You might need to think about that for a few seconds.  Pointers are one of the hardest things for many people to grasp.  A variable is normally associated with some value of some data type.  For instance in C:
int x;
declares a variable that will hold an integer. Simple enough.  Or this one, which declares an array of ten double values and puts those values into it:
double arr[10] = { 0.0, 1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7, 8.8, 9.9};
By using the name of the variable, you get the value associated with it:
x = 5;
int y = x;   // y now has the value 5 copied from x
But pointers are different.  A processor has a bunch of memory that can hold anything.  Think of the memory as an array of bytes.  That memory has no defined data type: it is up to the program to determine what value of what type goes where.  A pointer is an index of that array.  The pointer just "points into" the array of raw memory somewhere.  It is up to the programmer to decide what kind of value gets put into or taken out of that memory location.  This concept pretty much flies in the face of good programming as we have learned it over the last 50 years.  It breaks the abstraction our high level language is designed to give us in the interest of good, safe programming. Having a data type associated with a variable is important to good programming.  It gives the compiler the chance to check the data types used to make sure what you are doing makes sense.  If you have this code:
double d = 3.14159265358979;
int i = d;
With the type information in place, the compiler knows d holds a double and i is an integer, so it can warn you (with the warning level turned up) that the fractional part is about to be silently thrown away.  Without that check, a mistake like this could become a nasty bug that would be difficult to find.

Pointers are extremely powerful and useful.  But they are very dangerous.  Almost every language has pointers in one form or another.  But most of the "improvements" over C have been to make pointers safer.  Java and C# claim to not have pointers and replace them with "references" which are really just a safer, more abstract form of pointers under the hood.  Even C has tried to make pointers safer.  The ANSI standard added some more type checking features to C that had been missing.  In C, you declare a pointer like this:
int *p_x;   // declare a pointer to an integer
The "*" means "pointer to."  The variable name, p_x, holds a pointer to an integer: just the address of where the integer is stored.  If you assign the variable to another, you get a copy of the address, not the value:
int *p_y = p_x;   // p_y now holds the same address of an integer, not an integer.
To get the value "pointed to" you have to "dereference" the pointer, by using the same "*" operator:
int x = *p_x;   // get the value pointed to by p_x into x
The p_x tells the compiler "here is an address of an integer" and the "*" tells it to get the integer stored there.  What we are doing is assigning a data type of "pointer to int" to the pointer.  The following code won't compile:
double *p_d;
int *p_i;  
p_i = p_d; // assign address of double to address of int: ERROR
The compiler sees that you are trying to assign the address of a double value to the pointer declared to point to an int.  That is an error.  It gives us a little safety net under our pointers.  There are plenty of circumstances this won't help, but it is a start.

However, C doesn't want to get in our way.  From the start, C allowed the programmer to do about anything.  ANSI C didn't change that much, but did make it necessary to tell the compiler "I know what I am doing here is dangerous, but do it anyway."  Sometimes we want to have a pointer to raw memory without having a data type attached to that memory.  In C we can do that.  ANSI C defined the "void" data type for just such things.  The void data type means, essentially, no data type, just raw memory:
void * p_v;  // declare a "void pointer" that points to raw memory
This can be quite handy, like a stick of dynamite.

You can assign any object pointer to or from a void pointer, and that is where the danger starts.  As dangerous as pointers are, void pointers are even more so.  Once a pointer is declared as void, the compiler has NO idea what it points to, and so can't check anything you do with that pointer!  If there were no restrictions at all on using void pointers we could do this:
double d;            // an actual double for the pointer to point at
double *p_d = &d;    // declare a pointer to a double and point it at d
*p_d = 3.1415;       // put 3.1415 into the memory pointed to by p_d
void *p_v;           // declare a pointer to void
p_v = p_d;           // copy the address of the double into the void pointer -- no cast needed
int *p_i;            // declare p_i as a pointer to an integer
*p_i = *p_v;         // copy the value pointed to by p_v (a double!) to the integer pointed to by p_i
The compiler couldn't stop us and would have no way of checking that the values make sense.  It would cheerfully copy the memory pointed to by p_d and p_v into the memory pointed to by p_i, which is supposed to hold an integer.  Since floating point (double) values and integer values are stored in different formats, that will NOT be what we want!  We now have a nasty bug.  Fortunately, the designers of C made it so the last line above won't compile.  The void pointer must be "cast" to a pointer of the appropriate type before it can be dereferenced.  The cast tells the compiler "I know what I'm doing" if that is really what you want to do.
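
For the curious, here is roughly what the cast version looks like.  This is a sketch of legal-but-dangerous C, not a recommendation: once you write the cast, the compiler simply takes your word for it.
double d = 3.1415;
double *p_d = &d;          // a pointer to an actual double
void   *p_v = p_d;         // fine: any object pointer fits in a void pointer
int    *p_i = (int *)p_v;  // the cast says "trust me, this really points to an int"
int     x   = *p_i;        // ...but it doesn't, so this is exactly the nasty bug described above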

But we have only scratched the surface.  Pointers point to memory.  What all can be in memory?  Well, just about anything.  Where does the computer hold your code?  Yep, in memory!

Your code is held in memory, just like your data.  A pointer points to memory.  So a pointer can point to code.  Not all languages give you the opportunity to take advantage of that, but of course C does.
And, of course, it comes with a long list of dangers.  You can declare a "pointer to a function" that can then be "dereferenced" to call the function pointed to.  That is incredibly powerful!  You can now have a (pointer) variable that points to any function desired and call whatever function it is assigned.  One example, which is used in the C standard library, is a sorting function.  You can write the "world's best" sort function, but it normally will only sort one data type.  You would have to rewrite it for each and every data type you might want to sort.  But the algorithm is the same for all data types.  The only difference is how to compare whether one value is greater than the other.  So, if you write the sort function to take a pointer to a function that compares the data type, returning an indication of whether one value is greater than the other, you only have to write the comparison for each type.  Pass a pointer to that function when you call the sort function and you never have to rewrite the sort function again!

It should be fairly obvious that the compare function will need to accept certain parameters so the function can call it properly.  The sort function might look something like this:

void worlds_best_sort(void *array_to_sort, size_t count, size_t size, COMPARE_FN greater)
{
   char *a = array_to_sort;   // we can't do arithmetic on a void pointer, so treat the memory as bytes
   // some code
   if ( greater( &a[n * size], &a[(n + 1) * size] ) )
      // swap the two elements
   // some more code
}
Inside the sort function a call is made to the function passed in, "greater", to check if one value is greater than the other.  The "&" operator means "address of" and gives a pointer to the value on the right.  So the greater() function takes two pointers to the values it compares.  Perhaps it returns an int, a 1 if the first value is greater than the second, a 0 if not.  The pointer we pass must point to a function that takes these same parameters if we want it to actually work.  C lets us declare a pointer to a function and describes the function just like any other data type:
int (*compare_fn_p)(void *first, void *second);
That rather confusing line of code declares compare_fn_p as a pointer to a function.  The function pointed to will take two void pointers as parameters and return an int.  The parentheses around the *compare_fn_p tell the compiler we want a pointer to the function.  Without those parentheses the compiler would attach the "*" to the "int" and decide we were declaring a function (not a pointer to a function) that returned a pointer to an int.  Yes, it is very confusing.  It took me years to memorize that syntax.  To be even safer, C allows us to define a new type.  We can define all sorts of types and then declare variables of those new types.  That lets the compiler check our complex declarations for us and we don't have to memorize the often confusing declarations.  If we write this:
typedef int (*COMPARE_FN)( void *, void *);
we tell the compiler to define a type ("typedef") that is a pointer to a function that takes two void pointers as parameters and returns an int.  The name of the new type is COMPARE_FN.  Notice the typedef is just like the declaration above, except for the word "typedef" in front and the name of the new type is where the function pointer name was before.  The name, "COMPARE_FN," is what we used in the parameter list for the sort function above.  The sort function is expecting a pointer to a function that takes two void pointers and returns an int.  The compiler can check that any function pointer we pass points to the correct type of function, IF WE DECLARE THE FUNCTION PROPERLY!  If we write this function:
int compare_int( int * first, int *second);
and pass that to the function, it won't compile because the "function signature" doesn't match.
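
The standard library's qsort() is the classic real-world example of this technique, so here is a complete little program using it.  Note that qsort() wants its comparison function to take two const void pointers and to return negative, zero, or positive (rather than the 1-or-0 scheme above):
#include <stdio.h>
#include <stdlib.h>

// the kind of comparison function qsort() expects: negative, zero, or positive
static int compare_int(const void *first, const void *second)
{
   int a = *(const int *)first;     // cast back from void to the real type
   int b = *(const int *)second;
   return (a > b) - (a < b);
}

int main(void)
{
   int values[] = { 42, 7, 19, 3, 88 };

   qsort(values, 5, sizeof values[0], compare_int);   // pass the function pointer

   for (int i = 0; i < 5; i++)
      printf("%d ", values[i]);                       // prints: 3 7 19 42 88
   printf("\n");
   return 0;
}
Because qsort() is declared to take a pointer to exactly that kind of function, handing it something with the wrong signature gets flagged by the compiler, which is the whole point.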

As you can see, these declarations get messy quick.  We could avoid a lot of these messy declarations by using void pointers: raw pointers to whatever.  DON'T!  ANY pointer, no matter what it points to, can be assigned to a void pointer.  It loses all type information when that is done.  We could declare our sort routine like this:
void worlds_best_sort(void *array_to_sort, size_t count, size_t size, void *);
Now we can pass a pointer to ANY function, and indeed, any THING, as the parameter to the sort routine.  We wouldn't need to mess around with all those clumsy, confusing declarations of function pointer types.  But, the compiler can no longer help us out!  We could pass a pointer to printf() and the compiler would let us!  We could even pass a pointer to an int, which the program would try to run as a function!  But by declaring the type of function that can be used with a typedef declaration, we give the compiler the power to check on what we give it.  The same is true for other pointers, which can be just as disastrous.

I think the lesson here is that the C standard committee went to great lengths to graft some type checking onto a language that had very little, and still maintain compatibility.  They also left the programmer with the power to circumvent the type checking when needed, but it is wise to do that only when absolutely necessary.  As I saw in the code review earlier this week, even professional programmers who have been doing it a long time get tripped up by types and pointers.  If you aren't very familiar with pointers I urge you to learn about them: they are one of the most powerful aspects of C.  Whether you are new to pointers and/or programming or you have been at it for years, I even more strongly urge you to use the type checking given to us by the creators of C to make your pointers work right.  Pointers, by nature, are dangerous.  Let's make them as safe as we can.

Sunday, August 16, 2015

Let's Go P Code!

For those who have read a lot of my writing it won't be a shock that I'm not an Arduino fan.  I have also made it clear that I don't think C, C++, or the Arduino bastardization of both are the right language(s) for the Arduino target user.  When I preach these things the question is always asked, "what is the alternative?"  My response is usually something like "I don't have a good one.  We need to work on one."  Today I had somewhat of a fledgling idea.  It is far from complete, but it may be a path worthy of following.  Let me explain.

If you are familiar with Java, you probably know that it is a compiled / interpreted language.  That sentence alone will fire up the controversy.  But the simple fact is, Java compiles to a "bytecode" rather than native machine language, and "something" has to translate (interpret, usually) that bytecode to native machine code at some point to do anything useful.  What you may not know is that Java was originally designed to be an embedded language that was an improvement on, and somewhat based on, C++.  I'm not fond of Java for several reasons, but it does indeed have some advantages over C and C++.  However, for small embedded systems, like a typical Arduino, it still isn't the right language.  And, though it has some improvements, it still carries many of the problems of C and C++.  It also isn't really all that small as we would desire for a chip running at 20 MHz or less with 32K of program memory.

But for this exercise, let's look at some of the positives of Java.  That interpreted bytecode isn't all that inefficient.  It is much better than, say, the BASIC interpreter on a Picaxe.  And the language is light years beyond Picaxe.  (Picaxe and BASIC stamps and the like should result in prison sentences for their dealers!)  A slowdown of maybe 2 to 4 times is typical.  For that, we get the ability to run the (almost) exact same program on a number of processors.  All that is needed is a bytecode interpreter for the new platform.  We also get fast, relatively simple, standardized compilers that can be pretty good and helpful at the same time.  And we can use the same one(s) across the board.  The bytecode interpreter is rather simple and easy to write.  Much simpler than porting the whole compiler to a new processor.  And the way the code runs is standardized whether you are using an 8, 16, 32, or 64 bit processor: an int is always 32 bits.
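
To show how little there is to the core of such an interpreter, here is a toy sketch in C.  The opcodes and their meanings are made up for illustration; they aren't real P code or Java bytecodes, but the shape of the dispatch loop is the same idea:
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_PRINT, OP_HALT };   // invented opcodes

void run(const int *code)
{
   int stack[64];
   int sp = 0;                     // stack pointer
   int pc = 0;                     // index into the bytecode

   for (;;) {
      switch (code[pc++]) {
      case OP_PUSH:  stack[sp++] = code[pc++];        break;  // push the next word
      case OP_ADD:   sp--; stack[sp-1] += stack[sp];  break;  // add the top two values
      case OP_PRINT: printf("%d\n", stack[--sp]);     break;  // print and pop the top
      case OP_HALT:  return;
      }
   }
}

int main(void)
{
   int program[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PRINT, OP_HALT };
   run(program);                   // prints 5 on any machine with a C compiler
   return 0;
}
Port that loop to a new processor and every "bytecode" program you have runs there unchanged, which is exactly the appeal.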

So Java isn't the right language, but it has some good ideas.  Let's step back in time a bit before Java.  James Gosling, the inventor of Java, has said much of the idea of Java came from the UCSD P system.  Now, unless you have been in the computer field as long as I have or you are a computer history buff, you have probably never heard of the UCSD P system.  To get the story we need to step back even further.  Set your DeLorean's dash for 1960 and step on the gas.

Around 1960, high level languages were pretty new.  Theories used in modern compilers were only starting to be developed.  But some people who were very forward thinking realized that FORTRAN and COBOL, the two biggest languages of the day, were the wrong way to go.  They created a new, structured, programming language called Algol 60.  Most modern languages (C, C++, Java, Objective C, Python, etc.) trace much of their design back to Algol 60.  Although it was never very popular in the USA, it did gain some ground in Europe.  But it was developed early in the game, when new ideas were coming about pretty much daily.  It wasn't long before a lot of computer scientists of the day realized it needed to be improved.  So they started to develop what would become Algol 68, the new and improved version.  One of the computer scientists involved wanted to keep it simple like the original.  Niklaus Wirth pushed for a simple but cleaner version of the original Algol 60 (well, there was an earlier Algol 58, but it has never had much visibility).  Wirth proposed his Algol-W as a successor to Algol 60, but the Algol committee rejected that and went along with the thinking of the day: more features are better.  What they created as Algol 68 was a much larger and more complex language that many Algol lovers (including Wirth) found distasteful.  Wirth was a professor of computer science at ETH Zurich and wanted a nice, clean, simple language to use to teach programming without the complexities and dirtiness of the "real world."  He, for whatever reasons, decided not to use any of the Algol languages and created a new language, partly in response.  He called it Pascal (after Blaise Pascal, the mathematician).

Pascal is a simple and very clean language, although in original form it lacks many features needed to write "real" programs.  But it served well for teaching.  The first compiler was written for the CDC 6000 line of computers.  But soon after a "portable" compiler was created that produced "machine" language for a "virtual" computer that didn't really exist.  Interpreters were written for several different processors that would interpret this "P code" and run these programs.  Aha!  We have a single compiler that creates the same machine language to run on any machine for which we write a P Code interpreter!  But Pascal is a much smaller and simpler language, with much better facilities for beginning programmers than Java.  It was, after all, designed by a computer science professor specifically to teach new programmers how to write programs the "right" way!  Pascal, and especially the P Code compiler, became quite popular for general use.  Features to make it more appropriate for general use were added, often in something of an ad-hoc manner.  Nevertheless, it became quite popular and useful.  Especially with beginning programmers or people teaching beginning programmers.

Step now to the University of California at San Diego.  Dr. Kenneth Bowles was running the computer center there and trying to build a good system to teach programming.  He came up with the idea of making a simple system that could run on many different small computers rather than one large computer that everyone shared.  He found the Pascal compiler and its P Code, and started to work.  He and his staff/students created an entire development system and operating system based on P Code and the Pascal compiler.  It would run on many of the early personal computers and was quite popular.  IBM even offered it as an alternative to PC-DOS (MS-DOS) and CP/M.  I even found a paper on the web from an NSA journal describing how to get started with it.  Imagine writing one program and compiling it, then being able to run that very same program with no changes on your Commodore Pet, your Apple ][, your Altair running CP/M, your IBM PC, or any number of other computers.  Much as Java promises today, but in 64K and 1 or 2 MHz!

Pascal itself has many advantages over C for new programmers.  C (and its descendants) have many gotchas that trip up even experienced programmers on a regular basis.  Pascal was designed to protect against many of those.  The language Ada, devised for the US Department of Defense and well regarded when it comes to writing "correct" programs, was heavily based on Pascal.  There are downsides to Pascal, but they are relatively minor and could be rather easily overcome.  Turbo Pascal, popular in the early 80s on IBM PCs and CP/M machines (and one of the main killers of the UCSD P System), had many advances that took away most of the problems of Pascal.

So here is the idea.  Write P Code interpreters for some of the small micros available, like the Arduino.  Modernize and improve the UCSD system, especially the Pascal compiler.  Create a nice development system that allows Pascal to be written for most any small micro.  Many of the problems of C, C++, and Arduino go away almost instantly.  The performance won't be as good as the machine language produced by a native C compiler, but close enough for most purposes.  Certainly much better than other interpreted language options.  Niklaus Wirth went on to create several successors to Pascal, including the very powerful Modula-2 and Oberon.  Modula-2 was implemented early on with M Code, similar to P Code.  He even built a machine (Lilith) that ran M Code directly.  If Pascal isn't right, perhaps Modula-2 would be.

In any case, I think this is a path worth investigating.  I plan to do some work in the area.  I would be very interested in hearing what you think about the idea.

Oh!  And what about the title of this blog post?  Well, the P Code part should be fairly obvious at this point.  But the rest of it may not be.  My alma mater, Austin Peay State University, has a football fight slogan of "Let's Go Peay!"  I love that slogan, so I thought it would fit well with this idea.

Let's go P Code!

Saturday, August 8, 2015

Get Out of My Way!

In case you haven't noticed, I am building a PDP-8.  Notice I didn't say an emulator or a simulator.  I consider what I am doing a new implementation of the PDP-8 architecture.  I won't explain that here, but perhaps another post later will explain why I say that.   Today, I want to talk a little about how all our modern technology can make things harder, and what to do about it.

The PDP-8 is a very simple machine.  That was very much intentional.  Although the technology of the day was primitive compared to what we have now, much larger and more complex machines were being built.  If you doubt that, do a bit of research on the CDC 6600, the first supercomputer, that arrived about the same time as the PDP-8.  Much of what we consider bleeding edge tech was pioneered with that machine. With discrete transistors!  But the PDP-8 was different.

The PDP-8 had a front panel with switches and lights that gave the user/programmer direct access to the internals of the CPU.  If you really want to know what your computer is doing, nothing is better.  You could look at and change all the internal registers.  Even those the assembly language programmer normally has no direct control of.  You could single step: not only by instruction as with a modern debugger, but also by cycles within an instruction!  As useful as the front panel was, it wasn't very convenient for many uses.  So a standard interface was a terminal connected to a serial line.  The normal terminal of the day was a mechanical teletype.  A teletype was much like a typewriter that sent keystrokes over the serial line and printed whatever came back over it.  They were often used as terminals (even on early microcomputers!) well into the 1980s.

The PDP-8 hardware to interface with a teletype was rather simple, as was most of its hardware.  The I/O instructions of the machine are quite simple and the hardware was designed to match.  The teletype hardware had a flag (register bit) to indicate a character had been received, and another to indicate a character had finished sending.  The IOT (input output transfer) instruction could test one of those flags and skip the next instruction if it was set.  By following the IOT instruction with a jump instruction back to the IOT, you could have a loop that continuously tested the flag, waiting until a character was received.  When the flag was set, indicating the arrival of a character, the jump would be skipped over and the following code would read the incoming character.  Simple, neat, and clean.  A simple flip flop and a general test instruction allowed a two-instruction loop to wait on the arrival of a character.
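
For the curious, the classic loop looks like this.  The PDP-8 mnemonics are shown in the comments, and the C below is only a rough model of the same busy-wait, with made-up names for the emulated flag and buffer:
// The real thing is just two instructions (plus the read):
//    KSF        / skip the next instruction if the keyboard flag is set
//    JMP .-1    / not yet -- jump back and test the flag again
//    KRB        / flag is set: read the character and clear the flag
//
// A rough C model of that busy-wait inside an emulator:
while ( !kbd_flag )      // the IOT "skip if flag set" test
   ;                     // the JMP back: keep polling
ch = kbd_buffer;         // read the received character
kbd_flag = 0;            // reading clears the flag, ready for the next one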

We want to build a new PDP-8 using a microcontroller.  The chosen micro (NXP LPC1114 to start) will have a serial port that is fully compatible with the old teletype interface.  Remarkably, that interface was old when the PDP-8 was created, and we still use it basically unchanged today.  The interface hardware for serial ports like that today is called a UART (Universal Asynchronous Receiver Transmitter) or some variation of that.  Lo and behold, it will have that same flag bit to indicate a received (or transmitted) character!  Nice and simple.  That won't present any problems for our emulation code.  But we want to do our main development on a PC using all the power (and multiple large screens!) to make our lives easier.  So, since the C language is "portable" and the standard language for embedded (microcontroller) work, we write in C on the PC for most of the work, then move to the micro for the final details.  I prefer Linux, whose Unix heritage goes back almost as far as the PDP-8.  Alas, Linux gets in the way.

You see, Linux, like Unix before it, is intended to be a general purpose, multi-tasking, multi-user operating system.  In one word, that can be summed up as "abstraction."  In a general purpose operating system, you may want to attach any number of things to the computer and call it a terminal.  But you only want to write code one way instead of writing it for every different type of terminal differently.  So you create an abstract class of devices called "terminal" that defines a preset list of features that all the user programs will use to communicate with whatever is connected as a terminal.  Then, the OS (operating system) has "drivers" for each type of device that makes communicating with them the same and convenient for most user programs.  Notice the word "most."

Generally, most user programs aren't (or weren't in the early 1970s) interested in knowing if a key had been pressed or when, only getting the character that key represented.  So Unix (and by virtue of being mostly a clone of Unix, Linux) doesn't by default make that information available.  Instead, you try to read from the terminal and if nothing is available your program "blocks."  That means it just waits for a character.  That is great in a multi-tasking, multi-user OS: the OS can switch to another program and run that while it waits.  But it doesn't work for simulating the PDP-8 hardware: we have no way to simply test for a received character and continue.  Our program will stop, and control will go to Linux, while it waits for a character to be typed.

Now I must digress and mention that Windows does indeed have by default a method to test for a keystroke: kbhit().  The kbhit function in Windows (console mode) is exactly what we need.  It returns a zero if no key has been pressed, and non-zero if one has.  Windows is a multi-tasking OS similar to Linux, so why does it have kbhit() and Linux doesn't?  Not really by design, I assure you, but by default and compatibility.  Windows grew from DOS, which was a single-tasking, single-user, personal computer OS.  DOS was designed much in line with how the PDP-8 was designed.  When Windows was added on top of DOS, it had to bring along the baggage.  That chafed the Windows "big OS" designers a lot.

Now one of the things that made Unix (and Linux) so popular was that it would allow the programmer (and user) to do most anything that made sense.  You may have to work at it a bit, but you can do it. The multitude of creators over the last 45 years have made it possible to get at most low-level details of the hardware in one way or another.  I knew I wasn't the first to face this exact problem.  The kbhit function is well used.  So off to the Googles.  And, sure enough, I found a handful of examples that were nearly the same.  So I copied one.

//////////////////////////////////////////////////////////////////////////////
///   @file kbhit.c
///   @brief adapted from a post by Thantos on the cprogramming.com forums:
///    http://cboard.cprogramming.com/c-programming/63166-kbhit-linux.html
///
///  find out if a key has been pressed without blocking
///
//////////////////////////////////////////////////////////////////////////////

#include <stdio.h>
#include <termios.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/time.h>
#include "kbhit.h"

// Switch the console in or out of "kbhit" mode.  dir == 1 turns off
// canonical (line) mode and echo so keystrokes are available immediately;
// dir == 0 restores the settings we saved earlier.
void console_nonblock(int dir)
{
   static struct termios oldt, newt;
   static int old_saved = 0;

   if (dir == 1)
   {
      tcgetattr( STDIN_FILENO, &oldt);             // save the current terminal settings
      newt = oldt;
      newt.c_lflag &= ~( ICANON | ECHO);           // no line buffering, no echo
      tcsetattr( STDIN_FILENO, TCSANOW, &newt);
      old_saved = 1;
   }
   else
   {
      if(old_saved == 1)
      {
         tcsetattr( STDIN_FILENO, TCSANOW, &oldt); // put things back the way they were
         old_saved = 0;
      }
   }
}

// Return non-zero if a character is waiting on stdin, without blocking.
int kbhit(void)
{
   struct timeval tv;
   fd_set rdfs;

   tv.tv_sec = 0;                  // a zero timeout makes select() poll and
   tv.tv_usec = 0;                 // return immediately instead of waiting

   FD_ZERO(&rdfs);
   FD_SET(STDIN_FILENO, &rdfs);    // we only care about stdin

   select(STDIN_FILENO+1, &rdfs, NULL, NULL, &tv);
   return FD_ISSET(STDIN_FILENO, &rdfs);   // non-zero if a keystroke is ready
}

int test_kbhit(void)
{
   int ch;
   console_nonblock(1);  // select kbhit mode
   while( !kbhit())
   {
      putchar('.');
   }
   ch=getchar();
   printf("\nGot %c\n", ch);
   console_nonblock(0);
   return 0;
}

That's an awful lot of code to expose a flip-flop that is already there.  But such is the nature of abstraction.  Having a powerful OS in charge of such low-level matters is the right way to go.  But all too often it makes doing simple things difficult, if not impossible.  I'm glad that the creators of Unix and Linux gave me the option to get the OS out of the way.

Saturday, July 25, 2015

How to Build a 1965 Computer in 2015

In my last post I introduced my latest project, a replica of the PDP-8e.  I have been quite busy doing research and writing code.  The code is the easy part!  The hardest part is making it look authentic.  But, thankfully, I am not the only crazy person out there that wants to build a fifty year old computer.  There seem to be quite a few of us.  A few stand out. Oscar at Obsolescence Guaranteed has done much of what I'm working on.  His approach is a bit different.  First, he has re-created an 8i instead of an 8e.  Second, he was smart and used an emulator already written and hardware already built to make much of his reproduction.  He has done a great project and created a kit.  I highly recommend you check it out.
picture from http://vandermark.ch/pdp8/index.php?n=Hardware.Processor

Oscar has also done a lot of research that will help me.  Fonts and colors for the front panel, for instance.  Have you ever thought about how to go about reproducing a replica of a 50 year old piece of equipment and have all the colors and letters look just right?

There is also Spare Time Gizmos with the SBC 6120.  That project/product uses the Harris 6120 PDP-8 microprocessor as the CPU for a sort-of genuine PDP-8.  Alas, the 6120 and its predecessor the 6100 are rarer than gold-plated hens' teeth.  But it's a really cool project with lots of good information and inspiration, much like the PiDP-8 from Oscar.

Our approach will be a bit different.  The front panel will be as authentic as I can reasonably make it, in looks and operation.  So will the CPU, with a bit of a twist.  I don't really want to build a few boards of TTL logic to create the CPU.  The 6100 and 6120 are really hard to find and expensive when you do find one.  Replacements and spares are just as rare and expensive.  An FPGA would be good, but my Verilog and VHDL skills, as well as my fine-pitch SMD soldering skills, aren't up to it yet.  Not to mention I want to make this easily reproducible for others.  So I have decided to take a 21st century 32 bit microcontroller with flash and RAM onboard and nice input/output devices and create a new PDP-8 on a chip.

The PDP-8 instruction set is really simple.  There are eight basic instructions.  Two of those, Operate and IO Transfer, can actually encode several "microprogrammed" operations in a single instruction.  But overall, it's quite simple.  To emulate the instructions, at least two approaches can be used.  First, you can write code that simply ends up with the same result as each instruction.  That's fine for most uses and tends to be more efficient.  Let the new processor work the way it is intended and get the results you need.  As long as the final state at the end of each instruction is exactly what it would be with the real computer, everything is fine.  No matter how you got there.  The other way, though, is more suitable to our purpose.

What we are going to do is essentially emulate the same hardware as used in the PDP-8 in our program.  It will, of course, end up with the same results.  But how we get there matters.  The PDP-8 CPU has three "major states" with four clock cycles in each major state.  Our code will follow that, doing the same processing in each state and cycle as the real hardware does.  It won't be as efficient, but since we are running a very fast modern 32 bit processor to emulate a 12 bit system from 50 years ago, it should be plenty fast.  Which raises the question: why?

The answer is simple: we want to build an entire PDP-8, not just a program that emulates one.  The real PDP-8 can be stepped through individual cycles, and the front panel shows what is inside the CPU at each step.  We can only do that if we follow the same steps.  Plus, if we decide to add an external bus for peripherals (like the Omnibus of the real PDP-8e) we will need those individual cycles to make the real hardware respond properly.
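
To make that concrete, here is a very rough sketch of the shape the emulator takes.  All the names are invented for illustration, and the real code has much more in it, but the point is that the program is organized around the PDP-8's own major states rather than around whole instructions:
// The 8e steps through FETCH, DEFER (for indirect addressing), and EXECUTE
// major states, so the emulator is written as that same state machine.
typedef enum { FETCH, DEFER, EXECUTE } major_state;

typedef struct {
   unsigned short pc;          // 12-bit program counter
   unsigned short ac;          // 12-bit accumulator
   unsigned short mb;          // memory buffer register
   major_state    state;       // which major state we are in
   int            time_state;  // 1 through 4 within the major state
} pdp8_cpu;

void step_one_cycle(pdp8_cpu *cpu, unsigned short memory[])
{
   switch (cpu->state)
   {
   case FETCH:
      // read the instruction at pc, start decoding it, and decide whether
      // the next major state is DEFER, EXECUTE, or another FETCH
      break;
   case DEFER:
      // follow the indirect address through memory
      break;
   case EXECUTE:
      // carry out the memory reference operation (TAD, DCA, ISZ, ...)
      break;
   }
}
Because each call advances the machine by one cycle, the front panel code can stop it anywhere and display exactly what the registers hold, just like the switches and lights on the real thing.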

The first step is to write the emulator program and put it into a microcontroller.  I have started with an NXP LPC1114.  I have them, I'm familiar with them, and they are plenty powerful enough to make this work.  They come in a 28-pin DIP package for easy prototyping.  The biggest downfall is the DIP version only has 4K bytes of RAM, so we won't be able to emulate even a full 4K word PDP-8.  I think I can fit 2K words in it, though.  And that should be fun to play with.  Eventually I will use a somewhat larger chip that can hold the entire 4K words or maybe even the full-blown 32K words.  The plan is to create a chip that has a fixed program in flash that emulates a real PDP-8 as accurately as possible.  I will add the front panel hardware and any other IO devices to that.  Others can take the same chip, much as they might with a (hard to get) 6100 or 6120 chip, and use it as the CPU for their own PDP-8!

Currently, the CPU emulation is about 90% coded.  Once the basic coding is complete, testing for real will begin.  I've run a couple simple programs on it so far and the results are encouraging.  There is still a lot of work to do.  Mostly IO stuff.  But I hope to soon be able to post the initial version of a 21st century PDP-8 cpu chip.  Get your soldering irons warm and take that paper tape reader out of storage!


Thursday, July 16, 2015

A Mini Project -- BDK-8e

Today we take personal computers for granted.  They are everywhere.  But of course that wasn't always true. In the 1950s a "small" computer would fill a room and typically take as much power as a household (or more!)    A few lucky people got to work on small machines, like the IBM 650 where they had free and total access to the machine for a time. But mostly the machines were too big and expensive to allow one person total access.  They were typically locked away and the only access was by passing a deck of program cards to an operator with your new program to run and getting a printout back sometime later.  If there was an error in your program you had to start over.

Then the transistor got common and (relatively) cheap, and like skirts at the same time, computers started getting smaller.  Digital Equipment Corporation (DEC) was founded in 1957 with the intention of making computers.  The founders had just come from MIT and Lincoln Labs where they had played important roles in developing some early small computers.  But the venture capitalists wouldn't let them build computers right away, insisting they build logic modules like they had used in the earlier computers.  The computer market was perceived as too small, but the logic modules were in demand.

After two years they finally built their first computer, the PDP-1, using those same logic modules.  Indeed, the logic modules sold well and were used in several DEC computers.  The name PDP comes from Programmed Data Processor; again, the venture capitalists thought the term computer was too menacing.

The PDP-1 was well received.  It was much more affordable than other computers at only $120,000.  Most others cost $1,000,000 or more.  But more importantly it was "friendly."  A user could sit down at it and have it to himself (women were rare.)  It had some interesting devices attached, like a video monitor, that could be used for nifty stuff.  Perhaps the first video game, Spacewar, was created on the PDP-1.

With this first success, DEC moved forward with a series of new machines.  Inspired by their own success and some other interesting machines, especially the CDC-160 and the Linc, they decided to build  a really small and inexpensive machine: the PDP-8. The PDP-8 was a 12 bit computer, where all their previous machines except the PDP-5 were 18 bits.  It would be another four or five years before making the word size a power of 2 (8, 16, 32 bits) would become common.  Computer hardware was expensive and a minimum number of bits would cut the cost dramatically.  It also created limitations.  But the PDP-8 was the right machine at the right time, probably from the right company.  It sold for a mere $18,000.  Up to then a production run of 100 machines was considered a success.  The PDP-8 would go on to sell around 300,000 over the next fifteen to twenty five years.

The original PDP-8 led to successors: the PDP-8S (serial), the PDP-8I (integrated circuits), the PDP-8L (lower cost), and then the PDP-8e.  There were more models after that, but it seems the PDP-8e was the defining machine.  It was considerably higher performance than its predecessors and much smaller.  It was offered in a tabletop cabinet that would fit on a researcher's lab table.  Many were used for just such purposes.  It was more reliable as well, being made from newer Medium Scale Integration (MSI) integrated circuits that fit many more functions on a chip.  Reliability is directly linked to the number of parts and pins used, and the 8e reduced both considerably.  There were very few new features added to later models.  The 8e was in some ways the epitome of the 8 series.  To some degree it defined the minicomputer class.

As I mentioned, the 8 was a 12 bit computer.  That sounds odd today, but wasn't then.  Twelve bit, 18 bit, and 36 bit computers were rather common, and there was even a 9 bit machine that was one of the inspirations for the PDP-8.  Twelve bits allowed it to directly address 4 kilowords of RAM (4096 12 bit words, or 6 kilobytes).  That was the normal amount sold with the PDP-8.  It was magnetic core memory and horrendously expensive.  With a memory controller it could be expanded to 32 KWords.  The processor recognized eight basic instructions: AND, TAD (two's complement add), DCA (deposit and clear accumulator), ISZ (increment and skip on zero), JMP (jump to new address), JMS (jump to a subroutine and save the return address), IOT (input output transfer), and OPR (operate).  The last one is really interesting and is actually a whole family of instructions that didn't access memory directly.  OPR was "microprogrammed," meaning that several bits in the instruction had specific meanings and caused different operations to happen.  They could be combined to effectively execute several operations with one instruction.
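
As an illustration, here is a simplified sketch of how an emulator might decode a few of the Group 1 OPR bits.  The octal bit values are the standard ones, but the helper function is invented, the link/carry interaction is ignored, and the real sequencing of operations within the instruction is only hinted at:
// Decode a few Group 1 OPR (opcode 7) microprogrammed bits.
// ac is the 12-bit accumulator, link is the 1-bit link register.
void do_opr_group1(unsigned short instr, unsigned short *ac, unsigned short *link)
{
   if (instr & 00200) *ac = 0;                    // CLA: clear the accumulator
   if (instr & 00100) *link = 0;                  // CLL: clear the link
   if (instr & 00040) *ac = ~(*ac) & 07777;       // CMA: complement the accumulator
   if (instr & 00020) *link ^= 1;                 // CML: complement the link
   if (instr & 00001) *ac = (*ac + 1) & 07777;    // IAC: increment the accumulator
   // the rotate bits (RAR, RAL, and friends) would be handled here
}
Combinations like CLA IAC (clear the accumulator, then add one, leaving a 1 in AC) are where the "several operations in one instruction" trick comes from.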

So the PDP-8 was a very interesting, if minimal, machine.  And it was very popular and very important in the history of computers.  Many individuals collect them and keep them.  Alas, I do not have one.  Some people have re-created them either from original schematics or by using FPGAs or the 6100 or 6120 single-chip PDP-8 microprocessors.  Even those chips are hard to find now.  And that leads me to my new project.

I am creating a new PDP-8 implementation.  The heart of it will be a small microcontroller (ARM) that is programmed to emulate the PDP-8 as accurately as I can make it.  That means with the same input and output system and devices, as well as instruction set.  I intend it to run at the same performance level normally.  I hope to create an Omnibus (the PDP-8e bus) interface that will allow connecting actual PDP-8 peripherals.  I'm not sure exactly what all features it will include, but I plan to make it as complete as reasonably possible.  I will make all the code and schematics available in case anyone else wants to build one.

So, go and Google the PDP-8 to see what a wide array of stuff is on the web about this machine.  Then come back and follow along my journey creating this new relic.  It should be fun...

My PDP-8 project page.

Friday, July 3, 2015

Backups

By now I am sure we all have heard how important backups of our important data are.  I certainly know this lesson well.  But still, I got bit.  Being a "computer professional" I know better.  My daughter told me a few days ago that her laptop quit working.  That laptop and I have a long history.  I have patched it together several times.  But this time it was a different problem.  It wouldn't boot.

Ugh.  My heart sank.  I had been meaning to back it up, but just never had.  All her papers and, most importantly, pictures and videos are on that hard drive.  I tried a few of my best ideas, like booting Linux from a CD and trying to read the drive.  I even took the drive out, put it into my main Linux machine, and tried copying it with "dd", but to no avail.  Horrendous read errors all over the disk.  That's actually the good news.

Why is that the good news?  Because the drive reads sometimes but has read errors all over the place, which is a strong indication that it is NOT the data that is corrupted.  Instead, it seems more like an electronics / controller board problem.  If it were the data on the drive, the failures would likely be more consistent.  A long, long time ago, I fixed drives with similar problems by replacing the controller board with an identical one from another drive.  I don't have one like hers easily available, but there is a service that advertises exactly that sort of thing online: Outsource Data Recovery in Cleveland, Ohio.

Before I go any further let me tell you what I did wrong.  Trying to access the drive myself could have caused further damage.  If you ever find yourself in a similar situation, shut off the computer and remove the drive and don't do anything with it until you consult a professional.  Every bit of use after the first problem decreases the chances of successful data recovery.

OK, on with the story.  Outsource advertises "$60 hard drive repairs."  If the problem is as simple as mine appears to be, they advertise they can repair the electronics for essentially a $60 flat fee.  If the problem is more in depth they will let you know and let you decide what to do.  I looked around online for more information and reviews of their service.  There are not a lot.  I found about 8 good reviews, pretty much all giving five stars, and one bad one.  The one bad one did sound a bit flaky, like someone who didn't really know what they were talking about.  Over half the good ones were on either the company's own web site or their Google plus page.  So the details are a bit sketchy.  The Better Business Bureau in Cleveland Ohio has them listed, and has no complaints lodged against them.  They are not a member of the BBB.

So, I have decided to give them a try.  I will file a ticket with them and send the drive to them.  I am hopeful.  I think there is a good chance of success.  I will report back with the results.

The lesson here is: don't put off your backups.  Find a good backup strategy and stick to it.  Back up your phone or tablet too.  These days we carry our lives around in our pocket and it only takes a second to lose everything.  I bought a new 4 terabyte external drive to do my backups on.  I'm going now to continue backing up all my important data.  Maybe you should too.

Sunday, June 28, 2015

The Case for Space

Today was a disappointing day when a SpaceX Falcon 9 rocket exploded a little over two minutes after liftoff.  It was a lot of work, a lot of money, and a lot of supplies for the space station that were lost.  But what else was lost?  Of course, spectacular failures like that bring out the enemies of space exploration. Like vultures waiting on something to die, they swoop in with their simple-minded ideas of why we shouldn't explore space.  Typically, as was the case with the one I saw today, they say the money should be used to "cure cancer" or "feed hungry people" or otherwise save human lives.

That is an ignorant and simple-minded point of view.  The space program has brought countless benefits that have improved and saved lives, and continue to do so every day.  Weather satellites have dramatically increased the accuracy of weather forecasting and can give early warning of large storms so that people can be evacuated.  GPS and communication satellites have improved individual safety and made it possible to locate someone who is lost or injured, or allow someone immediate communication that may save their or someone else's life.  Air, sea, and land navigation have become much safer thanks to communication and weather satellites.  Experiments that could only be conducted in space have led to research that continues to help medical science.  And all this with extremely expensive and limited access to space.  Currently, SpaceX and other private companies are trying to commercialize space and make it much cheaper and more easily available.  It isn't easy.  Or cheap.

But I'm not here to talk about the benefits we have already received from space exploration.  I want to talk about the one big benefit we will all gain.  Survival of the human race.  And we better get on it.

Currently, the Earth holds about 7.2 billion people.  That number is growing exponentially.  How many can it support?  We are already feeling the strain.  The UN is recommending we start eating insects, because the land now used to raise animals for food is needed to grow crops and build housing.  Technology can only go so far.  How many people can we feed and house?  What will life be like when the planet gets so crowded that everyone is crammed together?

On the Wikipedia page on world population there is a nice graph of UN projections for population growth.



There are three possible projections: low, medium, and high.  With the low projection the population is expected to actually decrease beginning around 2050.  The medium projection shows it starting to flatten out around the same time.  The high projection shows around ten billion people on the Earth by 2050.  Most other estimates I have seen tend toward the higher number.  Unless the numbers do actually start to decline, we will at some point reach the level that the Earth can't sustain the population.  That would mean hard times for everyone.  Just like a growing family in a small "starter" house, we are going to need more room.  And if we were to somehow cure cancer and AIDS, and end murder and war, what would that do to the population?

What about natural disasters?  In recent years large meteor impacts and close encounters have been in the news.  There have been large impacts in the past.  It is believed a large impact caused the extinction of the dinosaurs millions of years ago.  Other large rocks from space have had devastating effects on the Earth.  It is just a matter of time before it happens again.  Will our society survive?  Would the human race survive?  I say the answer is a definite no on the first, and a likely no on the second.  We are keeping all our eggs in one basket.  We need to spread out.

Some might say we should shield ourselves.  Again, there is only so much we can do.  There is no way to completely protect ourselves and our planet from a large body crashing into it.  And, whatever we can do would require a good space program!  The only reason we know about a lot of the possibly dangerous rocks out there is because our modest space exploration to date has made it possible.

So, it comes down to this.  Even if you ignore all the proven benefits space exploration has already given us.  Even if you deny that it will likely provide us many more as it gets more common and cheaper.  There is still one great reason to invest more in space exploration.  Our survival as a race depends on it.  Overcrowding and a near certain major impact some time in the future will not allow us humans to continue living here as we do now.  We need to find more places to go. Kids grow up and move out of their parents' homes.  It's time for us to find a place of our own.


Thursday, June 25, 2015

Trailers

In a previous post I mentioned that I was going to try my hand at writing some books.  I've been working at that lately and I have a couple of early sneak previews.  There isn't a lot there yet, but you can consider them something like a trailer for a movie.

There are two books: one on electronics and one on embedded systems.  The one on electronics is better thought out and more coherent.  Here are the reasons for posting them as they stand.

1.  You get a sneak preview of what is coming and you may even find something useful.
2.  It gives me a way to track my progress.
3.  You can provide feedback and help guide the development.
4.  It helps keep people interested while they wait for the complete books.

Here is what I ask of you.  If you are interested, take a look at what is there.  Provide feedback to me on what you like or don't like.  If you think it is crap, tell me that.  If you see things that could be improved, let me know that.  Keep in mind I'm not asking for proofreading -- there is a lot of editing yet to be done.  What I am really interested in is how you like the overall feel of the books.  Do they seem like they will be helpful?  Do they explain things well?  Too much hot air?

The books are located in the "projects" section of my web site, here:
projects
The files are in PDF format.  I look forward to hearing what you have to say.

Sunday, June 21, 2015

Travel to the Beat of a Different Drum

Lately, I have run into some professional programmers that didn't even know what a kilobyte was.  They are so used to dealing in megabytes and gigabytes that the idea of measuring memory in the thousands of bytes was foreign.  With modern computer main memories measured in gigabytes and storage measured in terabytes we are spoiled.  In the 1950s kilobytes were precious. And expensive.  The first ten or fifteen years of computing was filled with new technologies to use as memory.  In the past I have written about the vacuum tube registers in ENIAC and the mercury delay lines and CRT memories of later computers.

Another technology that found fairly widespread use in those days was the magnetic drum.  Compared to delay lines and CRTs the magnetic drum memory was large.  Typical sizes were 8 to 64 kilobytes early on and expanded later.  Drums were used as main memory (RAM) and as storage, like a modern hard disk drive.  They were manufactured into the 1970s and used until the early 1980s.  They fell out of favor in the late 1950s and early 1960s as RAM when magnetic core memory became available.  They were eventually replaced for storage by the hard disk drive.

A magnetic drum is similar in concept and operation to a modern hard disk drive (HDD.)  But the HDD consists of flat disks that spin like an old vinyl record and store data on the flat surface.  It normally has a single read/write head for each storage surface and the head has to move back and forth across the surface to read different tracks of data.  In contrast, the magnetic drum was a single cylinder that stored data on the outside surface.  It spun around the central axis and typically had a read/write head for each data track.  That did away with the time needed to position the head over the needed track, but it still required time waiting on the proper data to spin around to the head.  To help overcome the rotational latency waiting on the proper data to spin around, it was common to place the next instruction of a program at the data location on the drum that would be just coming available as the previous instruction completed.  Some drum machines had the same number of read/write heads as the computer had bits in its instructions.  With that arrangement, an entire word of memory could be read or written at once.

As an example, the drum used in the IBM 650 was 16 inches long and had 40 tracks.  It could store 10 kilobytes.  It spun at 12,500 RPM, which works out to about 4.8 milliseconds per revolution, meaning that if you just missed the data you wanted you would have to wait nearly 5 milliseconds for it to come back around.  In these days of multi-gigahertz computers even a one microsecond delay is an eternity.  But such were the rules of the game in those days.

Magnetic drum memory
Magnetic drums from (left) UNIVAC computer and (right) IBM 650.
Photo from royal.pingdom.com

Magnetic core memory became available in the 1950s and gave the drum a short life as main memory.  Core was cheaper, faster, smaller, and higher capacity, and it quickly replaced the drum in most computers.  As storage memory the drum lived a while longer.  But the invention of the hard disk drive, with similar improvements, eventually killed the drum in that role as well.  Core memory would rule for about 20 years, and soon we will take a look at it.

Saturday, June 20, 2015

Riding the Rough C

C is probably the closest thing there is to a "standard" programming language.  You would be hard pressed to find any modern computing system that doesn't have C available.  Although it has somewhat fallen out of favor for large computing systems, like PCs (don't tell the Linux kernel developers that!) it is almost universally used for embedded systems.  In the embedded world it is the runaway winner, although C++ is gaining ground.

But C (and C++) has a downside.  I will write about C here, but keep in mind that pretty much everything I say applies to C++ as well.  One of the characteristics of C that made it so popular is flexibility: you are allowed to do just about anything, in any way you please.  But that flexibility is also its Achilles' heel.  I am going to show a few examples to make the point.  If you haven't been bitten hard by at least one of these, you can't call yourself a C programmer.

So, you know I'm going to show some examples of C "gotchas."  These will be isolated code fragments that you know have a problem.  Even knowing that, notice how difficult it is to spot each one.  Then imagine you have one of these bugs somewhere in your thousand or so lines of code, but you don't know what or where.  Consider how hard that will be to find.  C gives you enough rope to shoot yourself in the foot.

Let's start simple:

if( my_array[n]=1)
   printf("true\n");
else
   printf("false\n");

What does that do?  Hint: there is only one answer, with two parts, and it probably isn't what you think.  First, it will set the array element "my_array[n]" to 1, even if that element doesn't exist,  and then it will print "true."

It's likely you would have said something about "it depends on what is in my_array[n]."  But it doesn't.  The programmer probably meant to write "if( my_array[n] == 1)."  The operator "==" is the equality comparison operator, and the operator "=" is the assignment operator.  The "if" statement looks at the value of the expression within the parentheses to decide which branch to take.  By writing "my_array[n] == 1" it would compare the value stored in my_array[n] to 1 and, if they are equal, execute the "true" part, otherwise the "false" part.  But writing "my_array[n] = 1" assigns the value 1 to the array element, and an assignment yields the value assigned as the value of the expression.  So, since 1 is always assigned, 1 will always be the value inside the parentheses, which is considered true, and the "true" branch will always run.  Plus, you just put a 1 into your array.  This is perfectly legal C, and you might sometimes want to do an assignment that way, but not likely in an "if" statement like that.  A standard C compiler must compile it as is, and unless you turn the warning level up (-Wall and friends) most won't even mention it.  This one gets even experienced, professional programmers all the time.
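
For the record, here is the intended comparison, along with an old defensive habit: put the constant on the left, so that an accidental "=" becomes an error the compiler has to reject.
if( my_array[n] == 1 )     // "==" compares; "=" assigns
   printf("true\n");
else
   printf("false\n");

if( 1 == my_array[n] )     // mistyping this as "1 = my_array[n]" will not compile
   printf("true\n");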

Remember the other part of the answer?  "Even if it doesn't exist?"  What if you declare the array like this:
int my_array[20];
and when the "if" statement above runs n is equal to 20?  The declaration "my_array[20]" means to allocate an array of 20 elements numbered 0 to 19.  There is no my_array[20].  It doesn't exist.  But the C compiler will happily produce code that writes the 1 to element 20.  Since there is not an element 20, it will write over whatever is stored there!  Chances are that some other variable is allocated that chunk of memory and it will get changed without you knowing it.  That is a really nasty bug to find.

Here is another that is quite common and hard to find.  It shows up in a lot of different forms, and it is not always easy to even notice there is a problem.

int x = 10;
while( x>0);
{
   printf(" x = %d, x^2 = %d\n", x, x*x);
   --x;
}

What will it print? Nothing.  The program will stop when it gets to the "while" statement.  It goes into an endless loop, never getting to the next statement.  In C, the while statement is defined to be like this:
 "while  expression statement"
A statement is terminated with a semicolon.  Usually a statement does something, and a statement can be a compound statement: a series of statements inside curly braces ("{ }").  C also allows the "null statement," which is nothing at all terminated with a semicolon.  Look at the while statement in the example above.  The condition is "x>0" and the statement is the null statement ";".  After that comes a compound statement ("{ printf ... }").  That means the null statement finishes off the while statement, so no code gets executed as part of the loop.  The variable x never changes.  It just keeps looping.  Since the code below it is a perfectly legal compound statement, the compiler will merrily compile it all without any warnings.  But the program will effectively halt at the "while" statement.
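For comparison, removing that one stray semicolon gives the loop that was presumably intended.  This version counts down from 10 and then moves on:

#include <stdio.h>

int main(void)
{
    int x = 10;
    while (x > 0)              /* no semicolon: the braces below are the loop body */
    {
        printf(" x = %d, x^2 = %d\n", x, x * x);
        --x;
    }
    return 0;
}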

Here is another variation of that one:

int x = 10;
while( x> 0)
   printf( "x = %d\n", x);
  --x;

What does this print?  An infinite series of "x = 10" lines.  Look again at the definition of the "while" statement.  The "while ( expression )" part is followed by a single statement.  Often that will be a compound statement, but it can be any statement.  In this example, the "printf" statement alone is the loop body.  Since it is not a compound statement enclosed in curly braces, the next line ("--x") is not part of the loop body, no matter what the indentation suggests, so x never changes.  It's all perfectly legal C and the compiler won't even warn you.
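The fix is to make the intended body explicit with braces.  Many coding standards require braces around every if, while, and for body for exactly this reason:

#include <stdio.h>

int main(void)
{
    int x = 10;
    while (x > 0)
    {                          /* braces make both statements part of the loop */
        printf("x = %d\n", x);
        --x;
    }
    return 0;
}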

As I mentioned at the start, C is available everywhere.  It is standardized.  You would think this means it works the same everywhere, but that isn't true.  A common development method, and one I recommend, is to develop code on a PC, in C, that will eventually run on a microcontroller.  The PC has far greater resources for development and debugging, which makes it much easier to write and test a large part of the code.  But you have to watch out for differing behaviors.  Take a look:

int x = 0;
while ( x < 40000)
{
   // do something useful here
   ++x;
}

If you run that on a PC, say with Visual Studio or gcc, it will work just fine and "do something" 40000 times.  But when you transfer that code to a small microcontroller, say an Arduino, it will hang at the "while" statement in an endless loop.

The C standard defines an "int" to be the "natural" size of integers for a given machine.  On a PC, which is a 32 or 64 bit machine, an "int" will be either 32 or 64 bits and have a range of more than plus or minus 2 billion.  But on an 8 or 16 bit microcontroller an "int" will be 16 bits.  A 16 bit integer has a range of -32768 to +32767.  Because of the way computer hardware works, as the variable "x" is counting up when it reaches 32767 and adds one, it "rolls over" from binary 0111111111111111 (32767) to binary 1000000000000000 (-32768),  and start counting up again.  But it can never reach 40,000.  Your compiler may or may not give an error or a warning with this one, but there are plenty of variations that certainly will not.

C is a great language.  It's very versatile and very popular.  You can run it on just about anything and write any kind of code you want.  But it is full of traps.  I have only scratched the surface.  It would be easy to fill a large book with examples.  If you plan to use C (or C++), put in the effort to learn enough of the language to at least be able to recognize these pitfalls when you encounter them.  There are plenty, and no list of examples could ever be complete.  It will be up to you to watch for them and find them when, not if, they happen to you.

Wednesday, June 17, 2015

Hitting the High Notes

In college, I was constantly told by my professors how important it was to keep a lab notebook.  They were right.  Anyone who does technical things, either professionally or as a hobby, needs to keep a good notebook.  As you progress through your hobby or career, the value of your notebooks increases.  The little nuggets of hard-won knowledge are priceless.  You can refer back to old notes and find solutions to current problems.  You can look back at what you tried before and find out if it did or didn't work.  You can track your progression over time.  There are lots of other benefits, too.

Now imagine you suddenly got access to the notebooks of some very experienced engineers.  You can imagine they would be full of useful information.  Those notebooks certainly wouldn't  relieve you of the obligation of taking your own notes, but they would be very valuable.  Well, now you can.

Today, while visiting one of my favorite websites, embedded.com, I came across this article which is more or less a review of a new, free, e-book from Texas Instruments.  I want to thank David Ashton for writing that review and I highly recommend you read it. I won't repeat his work: he's done a much better job of reviewing it than I could.

Analog Engineer's Pocket Reference e-book

Embedded.com is geared primarily toward working engineers (although hobbyists should still read it!)  Since a lot of what I write is aimed at hobbyists, I want to take a little different path.  I want to tell you why you should read the book and how you should use it.

Since this book is essentially a collection of notes from working engineers, it assumes you know what all the terms mean.  For example, there is a section on modeling capacitors.  It briefly mentions equivalent series resistance (ESR) and equivalent series inductance (ESL), but doesn't really explain what they are.  A typical hobbyist probably won't really know what those terms mean.  But that stuff is very important when it comes to how to actually use capacitors in a circuit, and why.  So here is what I recommend.  Find a section of the book that is either relevant to what you are doing or that seems interesting.  Read briefly through it and Google the terms you don't understand.  Use the full names when possible ("equivalent series resistance" instead of "ESR") since you will get better results.  Read through the explanation: Wikipedia often has good explanations.  Then come back and read that section of the book again with your new knowledge.  You will likely find that a lot of mysterious things, like why certain types of capacitors are needed for different uses, suddenly become much clearer.

As I mentioned above, don't just look up things as needed.  Skim through the book looking for things that seem interesting or useful.  Read those and Google what you don't know.  Taking in the new knowledge in the context of how and why it is used will help to make the concepts much clearer than if you were just reading a textbook that explains them out of context.

I think this ebook is a great resource for hobbyists (and engineers!)  Texas Instruments is huge and they produce a LOT of good documents.  Not to mention useful parts.  So download the book now.  You will have to register with TI, but that isn't a bad thing.  They might notify you of new, interesting parts and documents.  Keep the book easily available on your computer and check it out often.  Treat it as a virtual mentor, always available to ask questions.


Saturday, June 13, 2015

Living Trees and Dead Dinosaurs

I enjoy writing sometimes, when I think I have something useful or interesting to say.  Perhaps that's obvious by the existence of this blog.  I will leave it to your judgement whether anything I write is useful or interesting.  I also enjoy teaching others the little bit that I know.  I think that learning is a wonderful and pleasurable thing and I like helping others experience it.  I also like to tell myself that I know a little bit about electronics and about building and programming embedded systems.

Since the "maker movement" has caused a resurgence of interest in electronics and with it a new interest in embedded systems (e.g. Arduino, Raspberry Pi, Beaglebones in assorted colors, etc.) I think I may be able to add something.  I've decided to write some books.  

If you know me, you know that very often I start on a project and it never gets finished.  There are a lot of reasons for that, but essentially it comes down to limited time.  As a project goes on, I find new things that get my attention.  So how am I going to keep interested long enough to finish?  That's where you come in.  If people find it useful and encourage me to continue, that will keep me interested.  To make that happen, they will need to see the work in progress.

So, here is the plan.  I have two books planned: one on electronics, and one on embedded systems.  I have actually already written a good amount of the embedded systems book.  The electronics book is just in outline form.  As I complete a chapter I will post about it here, with perhaps a highlight or two.  The actual book, as it develops, will be available as a download on my website.  I hope to get feedback so people can tell me what they like or don't like about the book(s) as they are written.  I can make changes as I see fit and draw encouragement to continue from that feedback.  That's the plan.  As I said, the results will depend on you and anyone else you find that may be interested.  Feedback from readers will be essential.

Are you or someone you know interested in such a thing?  Would you be interested in reading what I have to say about these subjects?  If so, please leave a comment so I can gauge the interest up front.  And point others that you think may be interested to this post.

Maybe you are wondering about the title of this post.  I don't intend to publish these books in the traditional paper format.  For now the intent is only to have them in pdf form online.  So, no dead trees will be involved: they can continue to live.  But some dead dinosaurs will have to give up their hydrocarbons to power your computer while you read.  Sorry, but that's how my twisted mind works.

Remember to leave your feedback comments!  Thanks.

Friday, June 5, 2015

Wherever I End Up

The automobile.  How has it affected your life?  Can you imagine what our lives would be like without it?  The subject of books and movies and songs.  The sketches and dreams of teenage boys.  Do you remember getting behind the wheel the first time?  Remember what it felt like the first time you drove alone?  The freedom and excitement of finally having your driver's license?  For roughly a hundred years the automobile has been a fixture of daily life in most of the world.  But the times they are a changin'.

1919 Ford Model T Highboy Coupe.jpg
"1919 Ford Model T Highboy Coupe". Licensed under CC BY-SA 3.0 via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:1919_Ford_Model_T_Highboy_Coupe.jpg#/media/File:1919_Ford_Model_T_Highboy_Coupe.jpg

If you compare the 1919 Model T in the picture above to most any common car of the late 1970s, there aren't many major differences.  Both would most likely have an internal combustion engine burning gasoline, connected to the wheels through a gearbox and mechanical linkage.  Of course the newer model would have more refined technology and some creature comforts like air conditioning and power steering and brakes, but the technology is nearly identical.  When you push the accelerator pedal on either one, a mechanical linkage would open a valve to allow more air and fuel into the engine.  The brake pedal would similarly have a mechanical or hydraulic linkage to the brakes on the wheels.  Pretty simple and straightforward.  For about sixty years the automobile didn't change much from a technological standpoint.

Much more refined, but technologically nearly the same.

"74 Corvette Stingray-red" by Barnstarbob at English Wikipedia. Licensed under CC BY-SA 3.0 via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:74_Corvette_Stingray-red.jpg#/media/File:74_Corvette_Stingray-red.jpg


But what is in your garage now?  Much of the technology is probably the same.  It probably still has an internal combustion engine.  But a lot is new and the new is taking over quickly. If you are still making payments on your car, it probably has more computers in it than existed in the world in 1953 when the Corvette was introduced. Those computers are leading a revolution in automobile engineering, as we shall see.


"An ECM from a 1996 Chevrolet Beretta- 2013-10-24 23-13" by User:Mgiardina09 - Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:An_ECM_from_a_1996_Chevrolet_Beretta-_2013-10-24_23-13.jpg#/media/File:An_ECM_from_a_1996_Chevrolet_Beretta-_2013-10-24_23-13.jpg


Computers in cars started off in the engine control unit (ECU).  Engine parameters like timing and fuel-air ratio need to be adjusted for the best efficiency, performance, and emissions as the operating environment changes.  The ECU replaces a lot of complex mechanical contraptions that were still less than optimal.  Now it is quite likely that your accelerator pedal is not connected directly to the fuel supply.  Rather, it is probably an input to the ECU.  The ECU treats the position of the accelerator, and even the rate of change of that position, as a request.  It considers your request in conjunction with all the other parameters it monitors to determine how much fuel to send to the engine and when.  It probably does a lot more, too.  And as time goes by it takes over more and more of your driving.

What about the brakes?  They probably still have the same hydraulic system they had in 1979.  But it has been augmented with a computer too.  Do you have anti-lock brakes? In an anti-lock brake system a computer monitors the brake pedal and wheel rotation when you are braking.  If it detects that a wheel is stopping before it should, indicating an impending skid, it will release pressure on the brake of that wheel.  To do that requires that the computer have some way of controlling the pressure on the brake for that wheel.  Modern systems have evolved to do more than prevent skids.  They now enhance stability and traction as well.  So again, your pressure on the brake pedal becomes something of a request to the braking system.  The computer now has control of the brake at each wheel and the messy, heavy, and space-consuming hydraulic lines become almost redundant.  In the future we will likely see more electronic brakes, where the wheel cylinder is actuated electrically by the computer, doing away with much or all of the hydraulic system.
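Just to make the idea concrete, here is a toy sketch of that decision loop.  It is my own illustration, not how any production ABS controller is written; the sensor and actuator functions are made-up stand-ins and the 0.8 slip threshold is arbitrary:

#include <stdio.h>

#define NUM_WHEELS 4

/* Dummy stand-ins for the real sensors and actuators. */
static double read_vehicle_speed(void)      { return 20.0; }                    /* m/s */
static double read_wheel_speed(int wheel)   { return wheel == 2 ? 10.0 : 19.5; }
static void   set_brake_pressure(int wheel, double p)
{
    printf("wheel %d: pressure %.1f\n", wheel, p);
}

/* One pass of the control loop: compare each wheel to the vehicle speed
   and back off the brake on any wheel that looks like it is about to lock. */
static void abs_control_step(double requested_pressure)
{
    double v = read_vehicle_speed();
    for (int i = 0; i < NUM_WHEELS; ++i)
    {
        double w = read_wheel_speed(i);
        if (v > 1.0 && w < 0.8 * v)
            set_brake_pressure(i, requested_pressure * 0.5);   /* impending lock: release */
        else
            set_brake_pressure(i, requested_pressure);         /* pass the request through */
    }
}

int main(void)
{
    abs_control_step(100.0);
    return 0;
}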

What about safety systems?  Many new cars have rear-looking cameras in addition to the standard mirrors so that the driver can see what is behind the car, especially when backing up.  In addition, obstacle sensors are becoming more common to warn of an impending collision with something the driver may not be able to see.  Since all these systems are computerized, and the various computers in the car are linked by a network, it is possible to control the brakes and engine to stop the vehicle without the driver having to take action.  Systems have also been developed to detect when the vehicle is about to leave the lane or road it is traveling in and warn the driver to take action.

And then there are communication and navigation aids.  Many new cars have some sort of cellular or Internet connection.  These are sometimes used to call for help automatically in the event of an accident, or to manually request assistance for various troubles.  Navigation aids allow the driver to enter a destination and get turn-by-turn directions to get there, taking current traffic and obstacles into account.  One possible future development that I think has a lot of merit is taken from modern aircraft.  In many modern aircraft, especially military, there is either a Head Up Display (HUD) or a Helmet Mounted Display (HMD), or both.  These systems provide a transparent overlay in front of the operator's field of view that presents real-time data on top of what the operator actually sees.  That data can be the typical data displayed on the dashboard.  My wife's 1995 Pontiac Grand Prix had a simple Head Up Display that projected most of the dash information onto the windshield, allowing the driver to get that information without having to look away from the road.  But how about if, instead of your navigation system speaking to you to give directions, it overlays a line on the display showing the exact route you need to take as you drive?  Current military systems are large and expensive, but automotive requirements are more lax and easier to meet at low cost.  No helmet is required: Google Glass or an Oculus Rift combined with a navigation system would probably provide a working solution.  Mass production would bring the cost way down, making it available to almost all car owners.  I hope we don't need weapons integration, which helps keep the cost down as well.

Night vision scene with color symbology
A night vision scene through a modern Helmet Mounted Display
http://jhmcsii.com/features-and-benefits/

The common thread through all of this has been that technology is improving safety, performance, and comfort by taking more responsibility away from the driver.  But so far, the systems have been limited to assisting a single driver of a single car.  What happens when the cars start communicating between themselves and with the roadways?  We already have the Google self-driving car, which works mostly alone, driving on the road with human drivers.  According to Wikipedia, cars are the largest cause of injury-related deaths in the world.  Most accidents are caused by poor human judgement.  With a computer in charge, the human factor is removed.  If all the cars are networked together and with the roadway and traffic signals, there can no longer be poor human judgement involved.  Although it is unlikely that accidents would be completely eliminated, they would become much less common.  The reduction in loss of human life, injury, and property damage would be dramatic.


Driverless Car
"Jurvetson Google driverless car trimmed" by Flckr user jurvetson (Steve Jurvetson). Trimmed and retouched with PS9 by Mariordo - http://commons.wikimedia.org/wiki/File:Jurvetson_Google_driverless_car.jpg. Licensed under CC BY-SA 2.0 via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:Jurvetson_Google_driverless_car_trimmed.jpg#/media/File:Jurvetson_Google_driverless_car_trimmed.jpg


The technology of actually propelling the vehicle is changing as well.  Fully electric cars aren't very common yet, but Tesla Motors has proven their viability.  There are several models available.  Replacing a large, heavy, complex internal combustion engine and drivetrain with two or four simple electric motors on the wheels has great benefits.  It also reduces pollution and dependence on the limited oil supplies.  More common now, though, is the hybrid vehicle.  It combines electric assistance with an internal combustion engine to reduce fuel use.  Look around.  You will probably see several hybrids close by, and maybe even an electric car.  Electric is the wave of the future.

The changes in automotive technology over the last thirty-five years have been dramatic and far reaching.  Computer and electronic technology has led that change, as in many other fields.  In the not-distant future, what we know as the automobile will not look much like what we grew up with.  Other than having four wheels, it is likely to be completely different.  What will it look like?  I don't think anyone can definitively answer that, but I have some predictions.

Let's look twenty years into the future.  Limited oil supplies and pollution concerns will make the internal combustion engine a small niche market.  It will become nearly impossible to buy a standard passenger car with anything other than electric power.  That car will drive itself, perhaps with some input from the "driver."  It won't be driving alone, though.  It will be communicating and negotiating its actions with all the nearby cars and with the roadways themselves.  A car won't proceed through an intersection until the intersection gives it permission.  It will know where all the other cars around it are, and where it is safe to go without hitting one.  Since it will be much safer and much less likely to be involved in an accident, many of the safety features we know now will go away or decline in prominence.  That will allow the car to be lighter and roomier without becoming any larger.  Communication will overtake control of the car as the dominant feature set.  Costs will likely come down, since most of the technology will be electronic, which historically has decreased rapidly in cost.  Cars will use less material, of lighter and cheaper construction.  The soccer mom in the minivan will be a lot less common: the kids can get to the game on their own since no driver will be required.

The automobile has changed the face of humanity drastically over the last hundred years.  But the changes coming now will be in some ways even more dramatic.  I've made my predictions, but only time will tell how it actually turns out.  I'm interested to hear what you have to say about it all.  Leave a comment below.

Where are we headed?  Wherever we end up.

Monday, June 1, 2015

I See What You Did

Some strange things have been used as computer memory over the years.  Previously I have written about the vacuum tube memory of ENIAC and mercury delay line memory used in some early computers.  Here I want to tell you about a really odd memory technology that was used early on.

For many years, until very recently, the Cathode Ray Tube (CRT) was the standard display for computer monitors as well as television sets.  If you are over 21, you most likely remember these large, heavy monitors.  You may also remember that if you turned off a CRT monitor or TV in a dark room, the screen would continue to glow for some time.  And you may even remember that the screen usually had a considerable static charge on it after being used.  It turns out that CRTs had some interesting properties, and early computer builders took advantage of one of those strange properties to use CRTs as memory.  As a side note, the name Cathode Ray Tube dates back to the late 1800s when the principles of operation were first discovered.  It is rather incorrect, but has stuck around for over a hundred years despite that.  It is an interesting story how the name came about, but that's another story for another time.  Let's see how CRTs were used as memory.

The CRT is an interesting device.  It is a vacuum tube that works on the same principle as any other vacuum tube.  A large glass "tube," somewhat bell shaped, is evacuated to create a vacuum.  At the back (narrow) end is placed a cathode, or negative terminal.  The large, somewhat flat display area at the front has another terminal (the anode) that is positive.  The cathode is heated to the point that electrons "fly off" the metal terminal and create a cloud of electrons around it.  Because of the difference in voltage applied to the cathode and anode, the cloud of electrons is attracted to the anode and flies toward it in a stream.  The electrons prefer the straightest, easiest path and without any coercing will hit the anode right in the middle.  Perhaps you have seen an older CRT monitor or TV show a bright spot right in the middle when the power first comes on or goes off.  The cathode is built in a shape that minimizes the size of the electron stream and is therefore called an "electron gun."

We now have a stream of electrons from the cathode to the anode.  But how does it display a picture?  The anode (display) area is also covered with a phosphorescent chemical compound that glows when struck by the electrons.  The intensity of the electron stream determines how bright it glows: stronger means brighter.  To get more than just a small spot in the center, the electron stream is deflected from side to side and up and down.  In most monitors and TVs the electrons are deflected by a magnetic field.  Around the sides of the tube, near the skinny cathode end, are some large coils of wire.  The coils have current passing through them that creates a magnetic field which attracts and repels the electrons, causing their path to curve.  By changing the strength of the field the beam of electrons can be made to hit any part of the screen.  In some types of CRT, normally used in oscilloscopes and "vector monitors," the coils of wire are replaced with large metal plates.  A voltage on those plates attracts and repels the beam much like the magnetic field does.  The electrostatic plates can deflect the beam faster than a changing magnetic field, so they are used where high speed is needed.  In either case, the beam is able to reach any point on the screen and create light and dark spots.  In a TV and most monitors, the beam is "scanned" from left to right to create a line, then from top to bottom to create a series of lines, building an entire image on the phosphor.

That is neat and all, but how is it used as memory?  Before I get to that let me tell you that some of this information was found at radiomuseum.org and you can get more details and some neat pictures there.  Check them out for some really neat information on early radio and electronics technology.  So, to use something as a "memory" it has to be able to store information.  It turns out that when the electron beam strikes the anode, it leaves a small amount of electrostatic charge.  Remember I mentioned the static charge on a CRT?  You may have experienced that before.  The charge is localized to where the beam hits, and will bleed away in a short time.  But it can be detected if the beam hits the same spot again before it bleeds away. The voltage measured at the outside surface of that spot on the screen will be just a bit higher when the beam hits it again because of the added static charges.  A one or a zero can be stored for a short time by either lighting or not lighting a small area (or lighting a smaller or larger area) and then reading the voltage on a second scan.  By dividing the screen up into storage areas and placing a grid of voltage measuring contacts over the display, the screen can store data for a short time.

Most computers today use dynamic RAM (DRAM) that stores charge in a capacitor.  That charge will also bleed away in a short time and must be refreshed periodically by reading the data and rewriting it.  Much like modern DRAM, the CRT had to be refreshed, and the same was true of the mercury delay lines we looked at before.  So the electronics were built to feed the read-out signal back to the input and rewrite the data.  An interesting point about CRT memory was that a second CRT could be added without the sensor grid over the face.  That one would have its screen visible and you could see the contents of the memory!  That is a great debugging tool.  A typical CRT could store a thousand or so bits.  A few dozen of them together would store a few kilobytes of data, which was a considerable improvement over the accumulators of ENIAC or the mercury delay lines.

I hope you are enjoying this series of posts.  Please leave some feedback so I know if this is interesting or not.  And if you think others may enjoy what I write, please spread the word.

Thanks.

Saturday, May 30, 2015

QuickSilver

The memory sizes of even cheap modern desktop and laptop computers would have been unthinkable to early computer builders.  Since I first started paying attention to the computer industry in the late 70s I have watched typical small computer memory go from 4 kilobytes to 4 gigabytes: an increase of a factor of a million.  I thought it would be interesting to look back at some of the technologies that have been used for computer memory and what their capacities were.

In the first post in this series I talked about the ENIAC memory.  The memory capacity of ENIAC was essentially 20 accumulator registers of ten decimal digits each.  They were made of vacuum tube flip flops and required 360 vacuum tubes for each accumulator.  The tubes were large, expensive, power-hungry, and unreliable.  To scale up memory sizes for larger stored-program computers, another solution was needed.

Radar had been developed during World War II to provide early warning of attacking aircraft.  Early systems had many problems, among them "clutter."  A fixed object in view of the radar would create a "blip" on the screen every time the radar took a reading, cluttering the display and making it hard to see incoming aircraft nearby.  One solution to the problem was to read the returned signal twice and subtract the second reading from the first.  A reflection that occurred in the same place both times would be erased by the subtraction, but if the reflection had moved it would still show.  To do that required that the first signal be delayed by the time between readings so it would be available to subtract the second signal from it.

One way that was devised to delay the first signal was a "mercury delay line."  The reflection signal from the radar was sent to a transducer (basically a speaker) that was mated to one end of a tube filled with mercury.  The transducer would create a wave in the mercury that was a replica of the reflection signal.  At the other end of the mercury tube was another transducer (basically a microphone) that would pick up the wave and recreate the electrical signal.  The recreated signal would be delayed by the time it took the acoustic wave to travel the length of the mercury tube.  With careful design, a faithful reproduction could be created at exactly the right delay to match with the following reflection signal.  That reproduction could then have the second signal subtracted from it.

One of the researchers working on the radar problem was J. Presper Eckert from the Moore School of Electrical Engineering at the University of Pennsylvania.  If you paid attention in Computer History class in school that name probably sounds familiar.  That is the same J. Presper Eckert who was working on designing and building ENIAC at the same time.  Eckert realized that computers were going to need more storage space than ENIAC had and that the same mercury delay lines he was using in radar could be used.  He developed and patented the idea of not only using mercury delay lines, but also using other materials in a similar way.

Mercury delay line unit from Univac computer, 1951.
"Mercury memory". Licensed under CC BY-SA 3.0 via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:Mercury_memory.jpg#/media/File:Mercury_memory.jpg

Eckert went on to use the mercury delay lines in later stored-program computers like the EDVAC and Univac I.  Each delay line could typically hold between 500 and 1000 bits.  The biggest problem with them was that they were not random access; they were sequential.  That meant you couldn't just pull out any bit you wanted.  The bits came out the far end in the same order they went in, and the computer had to wait for a bit to travel the full length of the line, typically about 200 to 500 microseconds.  The system was built to recirculate the bits back to the input as they came out, allowing them to be stored indefinitely unless intentionally changed.  Interestingly, the data was not stored electronically but as a sequence of acoustic waves traveling through the mercury.  The systems were complicated by several factors.  One big factor was temperature.  The speed of sound in the mercury (or other material) changes with temperature.  To keep the timing just right, the lines were typically heated to a fixed temperature and kept there.
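The recirculating, sequential behavior is easy to picture as a circular buffer in software.  This toy model is my own analogy, not period code: bits go in one end, come out the other a fixed number of steps later, and are normally fed straight back in so they persist until deliberately overwritten:

#include <stdio.h>
#include <string.h>

#define LINE_BITS 16            /* real mercury lines held roughly 500 to 1000 bits */

static unsigned char line[LINE_BITS];
static int head = 0;            /* index of the bit about to emerge from the far end */

/* One time step: the oldest bit emerges; either recirculate it or overwrite it. */
static int step(int write_enable, int write_bit)
{
    int out = line[head];
    line[head] = write_enable ? (unsigned char)write_bit : (unsigned char)out;
    head = (head + 1) % LINE_BITS;
    return out;
}

int main(void)
{
    memset(line, 0, sizeof line);

    /* Store a pattern one bit per time step.  You cannot jump to a given bit;
       you have to wait for its slot to come around. */
    for (int i = 0; i < LINE_BITS; ++i)
        step(1, i % 3 == 0);

    /* One full "revolution" later, read it back (and recirculate it). */
    for (int i = 0; i < LINE_BITS; ++i)
        printf("%d", step(0, 0));
    printf("\n");

    return 0;
}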

Other types of delay lines with different properties were used as well.  A popular type used a metal wire, often made from special alloys whose properties gave them advantages.  Such wires could often be coiled into a small space, so the delay line could be made much smaller while holding the same amount of data or more.  The famous computer scientist and mathematician Alan Turing even proposed using gin as the delay medium.  To my knowledge that was never used.

As odd as it may sound to us now, mercury delay line memory solved the problems early computer builders had.  It provided a relatively large amount of storage that matched the capabilities of the early computers, using technology available at the time.  Delay lines of various sorts are still used for some purposes, although not for the main memory of computers.  Digital shift registers behave much like an analog delay line such as the mercury delay line.  The very first Apple product, the Apple I, used shift registers as display memory.  They fit that application well since the data is sent to the display sequentially anyway.

The delay lines were an ingenious solution to the memory problem.  But engineers had more tricks up their sleeves.  I will be writing about some of them in later entries.  If you find these posts interesting please spread the word to other people who may be interested.  Thanks.