Saturday, May 30, 2015

QuickSilver

The memory sizes of even cheap modern desktop and laptop computers would have been unthinkable to early computer builders.  Since I first started paying attention to the computer industry in the late 70s, I have watched typical small-computer memory grow from 4 Kilobytes to 4 Gigabytes: an increase of over a million times.  I thought it would be interesting to look back at some of the technologies that have been used for computer memory, and what their capacities were.

In the first post in this series I talked about the ENIAC memory.  The memory capacity of ENIAC was essentially 20 accumulator registers of ten decimal digits each.  They were made of vacuum-tube flip-flops and required 360 vacuum tubes per accumulator.  The tubes were large, expensive, power-hungry, and unreliable.  To scale up memory sizes for larger stored-program computers, another solution was needed.

Radar had been developed during World War II to provide early warning of attacking aircraft.  Early systems had many problems, among them "clutter": reflections from fixed objects.  A fixed object in view of the radar would create a "blip" on the screen every time the radar took a reading, cluttering the display and making it hard to see incoming aircraft nearby.  One solution was to take two successive readings and subtract the second from the first.  A reflection that appeared in the same place in both would be erased by the subtraction, but a reflection that had moved would still show.  Doing that required delaying the first signal by the time between readings, so it would still be available when the second signal arrived.

One way that was devised to delay the first signal was a "mercury delay line."  The reflection signal from the radar was sent to a transducer (basically a speaker) that was mated to one end of a tube filled with mercury.  The transducer would create a wave in the mercury that was a replica of the reflection signal.  At the other end of the mercury tube was another transducer (basically a microphone) that would pick up the wave and recreate the electrical signal.  The recreated signal would be delayed by the time it took the acoustic wave to travel the length of the mercury tube.  With careful design, a faithful reproduction could be created at exactly the right delay to match with the following reflection signal.  That reproduction could then have the second signal subtracted from it.

One of the researchers working on the radar problem was J. Presper Eckert from the Moore School of Electrical Engineering at the University of Pennsylvania.  If you paid attention in Computer History class in school, that name probably sounds familiar.  That is the same J. Presper Eckert who was designing and building ENIAC at the same time.  Eckert realized that computers were going to need more storage space than ENIAC had, and that the same mercury delay lines he was using in radar could provide it.  He developed and patented the idea of using not only mercury delay lines but also other materials in a similar way.

Mercury delay line unit from Univac computer, 1951.
"Mercury memory". Licensed under CC BY-SA 3.0 via Wikimedia Commons - http://commons.wikimedia.org/wiki/File:Mercury_memory.jpg#/media/File:Mercury_memory.jpg

Eckert went on to use mercury delay lines in later stored-program computers like the EDVAC and Univac I.  Each delay line could typically hold between 500 and 1,000 bits.  Their biggest drawback was that they were not random access; they were sequential.  You couldn't just pull out any bit you wanted: the bits came out the far end in the same order they went in, and the computer had to wait for a bit to travel the full length of the line, typically 200 to 500 microseconds.  The system recirculated the bits back to the input as they emerged, allowing them to be stored indefinitely unless intentionally changed.  Interestingly, the data was not stored electronically but as a sequence of acoustic waves traveling through the mercury.  The systems were complicated by several factors, one big one being temperature: the speed of sound in the mercury (or other material) changes with temperature, so to keep the timing just right the lines were typically heated to a fixed temperature and held there.

Other types of delay lines with different properties were used as well.  A popular type used a metal wire, often made of special alloys chosen for advantageous properties, such as a speed of sound that varied little with temperature.  Wire delay lines could often be coiled into a small space, making the unit much smaller while holding the same amount of data or more.  The famous computer scientist and mathematician Alan Turing even proposed using gin as the delay medium.  To my knowledge, that was never tried.

As odd as it may sound to us now, mercury delay line memory solved the problems early computer builders faced.  It provided relatively large storage that matched the capabilities of the early computers, using technology available at the time.  Delay lines of various sorts are still used for some purposes, although not for the main memory of computers.  A digital shift register is very similar to an analog delay line like the mercury ones.  The very first Apple product, the Apple I, used shift registers as display memory; they fit that application well, since the data needs to be fed to the display sequentially anyway.

The delay lines were an ingenious solution to the memory problem.  But engineers had more tricks up their sleeves.  I will be writing about some of them in later entries.  If you find these posts interesting please spread the word to other people who may be interested.  Thanks.

Memories

Chances are, the PC you are using to read this blog has at least 2 Gigabytes of RAM (Random Access Memory).  And it probably fits on one or two small modules that measure around 1 inch tall by 5 inches wide and less than 1/4 inch thick.  A Gigabyte is 2^30 bytes (1,073,741,824), and each byte is composed of 8 bits (binary digits), so those 2 Gigabytes total 17,179,869,184 bits.  That is pretty amazing when you stop to think about it.  It wasn't that long ago that an amount of memory like that was almost unthinkable.  And that is considered a minimal computer today.  In addition, it most likely has about a megabyte (2^20, or 1,048,576 bytes) of flash ROM (Read Only Memory) to hold a permanent program, the BIOS, to start the machine up.  Thirty years ago, that alone would have been considered a large memory.

But computer memory hasn't always been so vast, small, and cheap.  Many technologies have been used in the past and it is interesting to take a look back and see how we got here.  Perhaps you have heard of magnetic core memory, which was king for about 20 years, from the mid 1950s to the mid 1970s.  But what else has been used for computer memory?  And how much did a computer have?

This entry is the first of several to take a look back at computer memory over the years.  I want to start with the ENIAC, generally considered to be the first general purpose electronic computer.  Almost all computers today are what is called a "stored program" type.  That means the program they run is held within their memory.  That wasn't always the case.  The ENIAC was programmed by wiring the instructions into changeable plugboards that were not accessed as memory.  That certainly decreased the memory requirements.  The stored program concept was developed while the ENIAC was being built, and it was later modified to work that way.

So, ENIAC didn't need memory to store its program.  It only needed memory to hold data.  In fact, it only needed memory to hold the data it was working on at that moment.  It had a card reader to get input to work on, and a card punch to output data to.  ENIAC was also different from almost all computers today in that it didn't use binary numbers.  All the numbers were decimal.  In essence, each decimal digit was represented by having ten vacuum tubes that could be either off or on.  One of those was turned on to represent which of ten digits (0 through 9) was held there.  That was somewhat inefficient, but it sure made numbers more convenient.  If you have ever had to convert numbers back and forth between binary and decimal, you will understand what a convenience that was.  And since ENIAC was designed strictly for numerical operations, it made things much simpler.

With that in mind, what did it have for memory?  Well, it didn't have any large chunk of storage memory like we take for granted now.  It had only twenty accumulators, what we would call registers now.  Each accumulator could hold a ten-digit signed number, from -9,999,999,999 to +9,999,999,999.  It would take a thirty-five-bit binary number to hold the same range of values, so ENIAC had roughly 20 × 35 = 700 bits of memory, or 87 1/2 bytes!  It seems rather amazing today, with typical computers holding billions of bytes, that they could do anything useful with that amount of memory.  But they did.  One of the first practical uses of ENIAC was feasibility calculations for the hydrogen bomb.  It went on to perform the calculations it was designed and built for: computing artillery firing tables for the US Army.  It stayed in continuous service for the US Army until 1955, receiving several upgrades during its life.

The memory of ENIAC was of course made from vacuum tubes.  Each digit of each accumulator took 36 tubes, for a total of 360 per accumulator.    For all 20 accumulators, that meant 7,200 tubes: a sizeable portion of the less than 18,000 total of ENIAC.  If you have ever seen any old vacuum tube equipment, imagine how much space 7,200 of them with supporting components would take.  All for 87 1/2 bytes of memory.  The memory cycle time, or how long it took to access something in memory, was 200 microseconds.  That meant a maximum speed of 5,000 operations per second.  Your PC probably does more than a billion operations per second.  We've come a long way.

That is how we got started.  But just as now, the users of the computers wanted bigger and faster.  Even ENIAC would be upgraded in its eight year working life.  More memory was added and it was converted to a stored program machine.  But a lot of useful work was done, much faster than it could have been otherwise, by using that 87 1/2 bytes of memory.  In later posts I will cover some of the improvements that were developed later.  There are some wild and wacky things that were used for computer memory!

Wednesday, May 27, 2015

Driving

Driving a car isn't hard.  Most 8-year-old kids can do it.  Start it, put it in gear, press the accelerator, and steer.  But driving well takes lots of practice.  We typically start with supervised learning at about age 15 and after a year or so get a driver's license.  Even then most people are marginal drivers, but they continue to learn and improve for most of their lives.  Some learn quickly; others just never seem to get very good at it.  A lot depends on how much they try.

Programming is a lot like driving.  Someone interested in learning can get on the web and find any number of tutorials.  Within a couple of days they can be writing simple programs of their own.  That's a great confidence booster, and gives one a sense of power much like the first time they drive on an open road.  But just like driving, that is simply the start of the journey.  Learning to program well is a lifetime journey.  And again like driving, how good that person gets at programming depends a lot on how much effort is put into it.  What does it take to become a good programmer?

Many new programmers think that knowing a programming language makes them a programmer.  And that knowing several languages makes them a good programmer.  Nothing could be further from the truth.  Does knowing English make one a writer?  Does knowing English, French, and Russian make one a good writer?  Does owning a Ferrari make one a good driver?  Does owning a Ferrari and a Maserati make one a race driver?  Programming is about solving problems.  It matters little what language is used to express the solution.  Some types of cars are better suited to certain tasks, and certain languages are better suited to certain types of programming.  Part of being a good programmer is knowing how to choose and use the right tool for the task at hand.  For a small line following robot, perhaps C is the right language.  For a large artificial intelligence database, perhaps Lisp is better suited.  Learn the strengths and weaknesses of different approaches and languages, then choose the one that is right.  It is much easier to learn a new language when you have a problem to solve that suits that language.

If a person learns to drive in a small economy car, then suddenly finds themselves behind the wheel of a large transport truck, things aren't likely to go well.  The knowledge and skills learned in one only superficially transfer to the other.  Something like parallel parking in the small car will be essentially impossible in the truck.  Computers come in different sizes just like automobiles.  One may be a small microcontroller with a few kilobytes of flash memory and 64 bytes of RAM.  Another may be a desktop PC with several gigabytes of RAM and a couple of terabytes of hard disk space.  Programming techniques that work well and make sense on one rarely transfer well to the other.  On the microcontroller, it may make sense (or be necessary) to use small, simple data structures and write code to be simple and efficient.  On the PC, it may make more sense to use dynamic memory allocation for large data structures and have garbage collection to clean up what's no longer used.  I have seen many people with some programming background on PCs, in something like Python or Java, try to transfer their techniques to an Arduino.  But creating an object of some "black box" class on a PC may use a few kilobytes of memory by itself, and the Arduino may only have two kilobytes in total.  Obviously, that isn't going to work.  A different approach is needed.  Learn what works and doesn't work on the type of system you are trying to program.  Remember that different types of computers require different development strategies.

When learning to drive, we don't memorize every road and every turn we need to take, or every place we need to stop.  We learn to read road signs and remember rules to apply in new situations, like when to give another car the right of way.  Similarly, we don't try to remember the solution to every programming problem we may encounter.  We need to learn the "rules of the road" for the types of systems we use and the types of programs we write, then apply those rules to the problem at hand, developing a new algorithm (method of solving a problem) for that particular problem.  A good way to pick up new ideas (rules) is by reading other people's code, provided it is good code!  Unfortunately, there is a lot of terrible code on the web, especially in Arduino land.  Part of the learning process is learning to separate the good from the bad.  Books and dedicated programming websites are often a good way to learn the basics of good code.  Asking someone you know to be a good programmer is also a good way to learn.  If you have someone like that available, ask them specific questions.  Don't ask "how should I write code?" unless you brought a lunch for both of you.  Keep in mind that every programmer has different ideas of what is good and bad.  If you ask ten programmers what makes good code, you will probably get at least twelve answers.

Programming is a very rewarding pursuit.  It opens up possibilities of power to control many, many things.  It is also a great career.  If you choose to take up programming, remember that it is a never-ending journey.  Keep learning and improving your skills.  After more than thirty-five years, I am still learning new things every day.  There is more to say about it than any one person could say in a lifetime.  So look around for what interests you.  Read books, visit websites, watch videos, and ask others for tips, techniques, and ideas.  Keep pushing forward.  But most of all, have fun.  I am!


Monday, May 25, 2015

Memorial Day

Today is Memorial Day here in the US.  Most of us will cook burgers and hot dogs on the grill, maybe go boating, and generally have a good time with family and friends.  But please remember why today is a holiday.

In the nearly 240 years since the founding fathers declared independence from Great Britain, many countries and organizations have tried to take our freedom, and the freedom of others in other countries.  American service members have always stepped up to protect that freedom.  And many thousands of them have died defending it.  Memorial Day is a day to recognize and remember those who have fallen performing that service.

Please enjoy your holiday as much as possible.  Those who gave their lives for your freedom would probably not want their death to be in vain.  But also please take a moment to thank them and remember them.  Say a prayer for them.  Remember that their ultimate sacrifice was for you and others around the world to be able to enjoy these days.

Sunday, May 24, 2015

Object Spaghetti

Every so often some new programming paradigm comes along  that is hyped to save us (programmers) from ourselves.  Most of them have at least some merit, and some are very good and stand the test of time.  For instance, structured programming was just a buzzword before the term buzzword became a buzzword.  But it made great promises and mostly has lived up to that promise.  It has lived up to it so much that most new programmers aren't even taught the term any more: it is simply acknowledged that programming will be done that way.

Object oriented programming has made some great promises and has been rather successful.  Not to the same extent that structured programming has, though.  And it has taken longer.  What many people don't realize is that object oriented programming is nearly as old as structured programming.  It made its first appearance (by my minimal research) in Simula 67, almost 50 years ago.  But it didn't really begin to gain traction until the 80s and 90s, with the introduction of Smalltalk-80, C++, and Java.  A large percentage of new languages being introduced are object oriented to one degree or another, and most of the older languages have had objects added to some extent (including BASIC, Pascal, Fortran, and even COBOL).  Microsoft entered the arena with C#, which is very similar to Java in many ways, but I think much better.  I've heard it described as "Java done right."  That seems about right to me.  I write a lot of C# and rather like it.  But what has object oriented programming really done for us?  Is the success of OOP (object oriented programming) its own downfall?  What have we learned?  What have we forgotten?

If there is one common thread among programmers, it is that we are lazy.  We are always looking for the easy way out.  And we demand that our languages give us those easy ways.  Every so often a language comes along from research that is nice and clean, making the theoretical computer scientists somewhat happy as well as the practicing programmers.  And if real people writing real code find it useful, and it is hip and cool, it will become successful.  One thing that makes a language attractive to real programmers is when it makes the all-too-frequent "uncommon" kinds of things relatively easy.  Take a look at C.  Why was it so successful?  In part, because it was a fairly clean structured language that still allowed you to do almost anything, including efficiently twiddling bits and moving data around at a low level.  C became insanely popular.  It is still used for quite a few large programs.  It is the language of choice for the Linux kernel.  It is the closest thing to a standard for embedded systems.  Programmers like it.  It makes complex and off-the-wall things easy that other languages make difficult.

C# started out as a really nice, clean, object oriented language that both academia and real programmers could get behind.  It enforced some safety mechanisms but was still flexible enough to make programmers pretty happy.  And with Microsoft making it their language of choice, it almost had to become successful.  With that success has come the "wouldn't it be neat if" features.  But that isn't the root of the problem, just a symptom.

Wait a minute.  What problem?

Oh, yeah.  We have a problem.  And just like addiction, the first step to recovery is to admit you have a problem.  Our problem is that we are writing terrible code, but we are packaging it up nice and pretty with our World Saving OOP and our design patterns and calling it good.  And we are teaching whole generations of programmers, who have never heard the term structured programming, how to write spaghetti code.

One of the basic foundations of OOP is that it aids in data hiding.  Within a class or object, we hide away the data that object needs internally without exposing it to the outside world.  We can be sure it is in the state we want it to be in because nothing outside the object can access it without going through us.  Obviously, some of that data needs to be read or written from outside.  Typically, we write accessor methods of some sort: getters and setters.  We don't expose the variable itself.  If someone wants to read a variable, we provide a getter method (e.g., getVelocity()) that returns the value any way we want.  We are free to store it in whatever format makes the most sense, and pass it to anyone who needs it in a well-structured, fixed way.  Our getter method will do any translations needed.  If we need to set the variable, we write a setter method (e.g., setVelocity(double metersPerSecond)) and check the incoming values for sanity before assigning the variable.  Again, we can do any processing required and store the value however makes the most sense.  That makes the velocity value accessible, but only in safe, controlled ways we authorize.  Nice.  Clean.  Safe.

But programmers are lazy.  That's a lot of little methods we have to write.  And then we have to call a method to get or set the value.  "It would be neat if" we could access it just like a variable, without all that method-call typing.  But public variables are bad.  We can't just make them public.  The creators of C# used a really neat trick (I don't know who invented it; C# was the first place I encountered it) that lets you access private variables as if they were public, but safely: the property.

A property looks and works like a variable, but with backing code that acts like a method.  It's a really neat and useful idea.  You get the convenience of a public variable with the safety of using accessor methods.  Say you have a class Automobile and in that class you have a private variable _speed.  You want to be able to set the speed directly, but without exposing the variable.  You can create a property in the class like so:

public double Speed
{
    get { return _speed; }
    set { if (value >= 0.0) _speed = value; }
}

The get clause allows read access to the property (and the backing private variable) and simply returns its value.  The set clause allows you to set the value, but only if it is not negative.  Of course, you can do much more than shown here.  You can then use the property just like a variable once you have created an instance of the Automobile class:

Automobile maserati = new Automobile();

maserati.Speed = 185; // lost my license, now I don't drive

The Speed property looks and acts pretty much like a public variable except that it provides the same protection as getter and setter methods.  You can omit either the get or set clause if you only want to provide one or the other.  If the set clause was omitted above, you could read the value, but not set it.

But programmers are lazy.  Somewhere along the way somebody decided that was too much trouble.  They added a bit more syntactic sugar to the language.  Now the language will allow you to do this:

public double Speed { get; set; }

and you don't even have to declare the backing variable.  The compiler will create it for you.  It's simple and easy, there is almost no code to write, and it has unfortunately become very common.  It is also very, very dangerous and very bad style.  Programmers are lazy.  I'm waiting for someone to tell me how this is any different from a public variable.  There is absolutely nothing to prevent me from assigning any value to Speed.  I can now make my maserati go -4987.25 miles per hour.  Granted, we could have had the same effect without this new syntax by just doing this:

public double Speed
{
    get { return _speed; }
    set { _speed = value; }
}

But I have rarely seen this.  Perhaps forcing the programmer to write out that much puts him in the mindset that he (or she) needs to put in some protection.  I don't know.  I just know programmers are lazy.  Look around the web for C# code.  I suspect you will find a lot of those constructs.  This is the stuff that OOP was created to prevent.  What are we doing?

I've seen plenty more stuff in OOP that throws me back to my Microsoft line numbered BASIC days.  One day soon I plan to rant about design patterns: the morphine of coding.  They are good when you need them, but very easy to abuse.

I've come to realize that any language can be used to write terrible code.  It takes a concerted effort to keep from falling into the trap.  But I am a firm believer that our programming languages should help us stay away from those traps rather than lead us to them.



Saturday, May 23, 2015

Stirring the Pot

Pond water gets stagnant.
Soup settles.
People get complacent (or lazy.)

Once in a while, things need to be stirred up.  I have found I'm pretty good at that. I like to stir the pot.  I also like to help people learn if I have something to offer them.  This blog is my newest outlet for doing those things.  Most of the content will be about computers and electronics, especially developing software and hardware.  One of my favorite subjects is embedded systems.  But on occasion I will likely throw in some politics, or religion, or physics, or whatever.  But I hope you find something interesting.  If not, well, there are plenty of other blogs out there to read.  I welcome all constructive criticism.  If you disagree with what I say, that's fine.  Please let me know.  If you are just being nasty I may respond in kind or simply ignore you.  Whatever I feel like that day.

You're still here?  OK, how about a little background on me before anything else.  I was born and raised mostly a country boy in the southeast United States.  I owned guns from the time I was 12 and could shoot pretty well.  I was kicked out of high school on my 16th birthday, in my third year of the 9th grade.  When I turned 18 a good friend got me a job as a laborer in the cabinet shop he worked at. I liked the job, but after a couple months I realized I was in a rut.  I didn't want to spend the rest of my life that way.  Serendipity stepped in and saved me.  A series of circumstances led me to the army recruiter and I enlisted.  I enjoyed it, but wanted to become an engineer.  I got out after three years and started college.  More strange circumstances led me back to the army after three years.  I spent seventeen more years there and retired in 2007.

During that time,  I got married.  The most wonderful, beautiful woman in all the universes had been managing a Captain D's restaurant I frequented when I was out of the army.  I enjoyed my time in the army and still miss it, but I had always wanted to be an engineer.  And now I wanted to settle down with my family.  So, after retirement, I went back to school again.  The only university close enough to me didn't offer electrical engineering, my preferred choice.  So I majored in physics, concentrating on engineering type classes.  I minored in computer science.

After graduating, I got a job in Washington state as an embedded software developer.  My whole family hated Washington.  But again, things worked out great and we moved back to Clarksville, TN, where we had left two and a half years earlier.  I now develop Windows software for a defense contractor.  Some pretty cool stuff.  Not your average desktop apps.  Kind of embedded.

All along the way I have been building things and writing code.  And there are a lot of things I see that bother me.  Much software, and to a lesser extent hardware, is poorly designed and implemented.  I frequent hobby electronics and robot sites, and see people making serious mistakes.  I know some of them will become actual engineers, and others will build and sell products made the same way.  I have made it somewhat of a personal mission to get the word out on how things can and should be: better designs, better implementations, better products.  Before the bad habits set in too deep.

With that said, let's dive in and stir the pot.

Many people, hobbyists and engineers, refuse to see what's right in front of their faces, even if you rub their noses in it.  For instance, all the research that has been done shows that peer code reviews (having other people look through your code early on) save way more time than they cost.  They are proven to find bugs that would have cost much time (and money!) to fix later.  Yet companies refuse to conduct them, often because they "don't have the time."  Then they spend much more time later fixing the bugs that would most likely have been found.

Here's another example I experienced (again) just today.  There are a lot of bad circuits around that don't work at all, only mostly work, or are outright dangerous.  For example, using a lone capacitor to "debounce" a switch connected to a microcontroller.  Those who have done actual research, and posted articles on the web or published them in magazines, show that these circuits are bad and explain why.  But gently pointing to that research, scope photos and all, gets nasty or curt comments about "it works for me" or "I just wanted simple" when the commenter often obviously didn't even read the article.  I realize that hobbyists especially often don't understand all the subtleties in the articles, but the conclusions are well stated and obvious.  Yet they continue to use things that don't work.  Why?

I make my living writing software.  Arduino (and others like it) have opened the floodgates on people writing embedded software.  Many of those people, especially young ones, will become embedded or software engineers.  But they learn from other Arduino users, who probably learned on their own or from still other Arduino users.  What happens is, these people never get properly introduced to good engineering habits and practices.  If and when they become engineers, they will have developed lots of bad habits that will be nearly impossible to break.  They will design the products we use and depend on.  That our very lives depend on.  Ask Toyota what happens when you don't adhere to good engineering practices.  After the loss of several human lives, and a cool couple of billion dollars so far, Toyota is realizing that cutting corners is expensive.

So that is where I'm coming from.  That's my stated agenda.  Follow along as I rant and rave about things.  It might be politics, or people, or preachers, or PCs.  But it will always stir the pot.

Thanks for reading this far.  I hope you will come back later.

Will