Friday, 24 October 2014

How C Programming Works, and How It's Done in Reality



How C Programming Works?

The C programming language is incredibly popular, and it's easy to see why. Programming in C is efficient and gives the programmer a great deal of control. Many other programming languages, such as C++, Java and Python, were built on C or implemented using it.
The C programming language gives you more versatility than many other languages, including greater control over your computer's memory.
Chances are increasing each day that if you're a programmer, you won't use C exclusively for your work. However, learning C is highly beneficial in several ways, even if you don't use it regularly. Here's why:
You'll be able to read and write code for software that can be used on many different types of computer platforms, including everything from small microcontrollers to desktop, laptop and mobile operating systems.
You'll better understand what high-level languages are doing behind the scenes, such as memory management and garbage collection. This understanding can help you write programs that work more efficiently.
If you're an information technology (IT) specialist, you could also benefit from learning C. IT professionals often write, maintain and run scripts as part of their job. A script is a list of instructions for a computer's operating system to follow. To run certain scripts, the computer sets up a controlled execution environment called a shell. Since most operating system shells are based on C, the C shell is a popular scripting adaptation of C used by IT pros.
This article covers the history behind C, looks at why C is so important, shows examples of some basic C code and explores some important features of C, including data types, operations, functions, pointers and memory management. Though this article isn't an instruction manual for programming in C, it does cover what makes C programming unique in a way that goes beyond those first few chapters of the average C programming guide.
Let's start by looking at where the C programming language came from, how it has developed and the role it has in software development today.

What is C?

The simplest way to define C is to call it a computer programming language, meaning you can write software with it that a computer can execute. The result could be a large computer application, like your Web browser, or a tiny set of instructions embedded in a microprocessor or other computer component.
The language C was developed in the early 1970s at Bell Laboratories, primarily credited to the work of Ken Thompson and Dennis Ritchie. Programmers needed a more user-friendly set of instructions for the UNIX operating system, which at the time required programs written in assembly language. Assembly programs, which speak directly to a computer's hardware, are long and difficult to debug, and they required tedious, time-consuming work to add new features [source: King].
Thompson's first attempt at a high-level language was called B, a tribute to the system programming language BCPL on which it was based. When Bell Labs acquired a Digital Equipment Corporation (DEC) UNIX system model PDP-11, Thompson reworked B to better fit the demands of the newer, better system hardware. Thus, B's successor, C, was born. By 1973, C was stable enough that UNIX itself could be rewritten using this innovative new higher-level language [source: King].
Before C could be used effectively beyond Bell Labs, other programmers needed a document that explained how to use it. In 1978, the book "The C Programming Language" by Brian Kernighan and Dennis Ritchie, known by C enthusiasts as K&R or the "White Book," became the definitive source for C programming. As of this writing, the second edition of K&R, originally published in 1988, is still widely available. The original, pre-standard version of C is called K&R C based on that book.
To ensure that people didn't create their own dialects over time, C developers worked through the 1980s to create standards for the language. The U.S. standard for C, American National Standards Institute (ANSI) standard X3.159-1989, became official in 1989. The International Organization for Standardization (ISO) standard, ISO/IEC 9899:1990, followed in 1990. The versions of C after K&R reference these standards and their later revisions (C89, C90 and C99). You might also see C89 referred to as "ANSI C," "ANSI/ISO C" or "ISO C."
C and its use in UNIX was just one part of the boom in operating system development through the 1980s. For all its improvements over its predecessors, though, C was still not effortless to use for developing larger software applications. As computers became more powerful, demand increased for an easier programming experience. This demand prompted programmers to build their own compilers, and thus their own new programming languages, using C. These new languages could simplify coding complex tasks with lots of moving parts. For example, languages like C++ and Java, both developed from C, simplified object-oriented programming, a programming approach that optimizes a programmer's ability to reuse code.
Now that you know a little background, let's look at the mechanics of C itself.

Editing and Compiling C Code

C is what's referred to as a compiled language, meaning you have to use a compiler to turn the code into an executable file before you can run it. The code is written into one or more text files, which you can open, read and edit in any text editor, such as Notepad in Windows, TextEdit on a Mac, and gedit in Linux. An executable file is something the computer can run (execute). The compiler checks the code for errors and, if it seems to be error-free, creates an executable file.
Before we look at what goes into the C code, let's be sure we can find and use a C compiler. If you're using Mac OS X or most Linux distributions (such as Ubuntu), you can add a C compiler to your computer by installing the development tools software for that particular OS. These free C compilers are command line tools, which means you'll typically run them from a command prompt in a terminal window. The command to run one of these C compilers is "cc" or "gcc" plus some command line options and arguments, which are other words typed after the command before you press Enter.
If you're using Microsoft Windows, or you would prefer to use a graphical user interface rather than a command line, you can install an integrated development environment (IDE) for C programming. An IDE is a single interface where you can write your code, compile it, test it and quickly find and fix errors. For Windows, you could purchase Microsoft Visual C++ software, an IDE for both C and C++ programming. Another popular IDE is Eclipse, a free Java-based IDE that runs on Windows, Mac and Linux and has extensions available for compiling C and many other programming languages.
For C, as for other computer programming languages, the version of the compiler you use is very important. You always want to use a compiler that's at least as new as the version of the C standard you're using in your program. If you're using an IDE, be sure to adjust your settings to make sure the IDE is using your target C version for the program you're working on. If you're at a command line, you can add a command line argument to select the version, as in the following command:
gcc -std=c99 -o myprogram.exe myprogram.c
In the command above, "gcc" is the call to run the compiler and everything else is a command line option or argument. The "-std=c99" option tells the compiler to use the C99 standard version of C while compiling. The "-o" option followed by "myprogram.exe" requests that the executable, the compiler's output file, be named myprogram.exe. Without "-o" the executable is automatically named a.out instead. The final argument, "myprogram.c", indicates the text file with the C code to be compiled. In short, this command is saying, "Hey, gcc, compile myprogram.c using the C99 C programming standard and put the results in a file named myprogram.exe." Browse the Web for a complete list of options you can use with your particular compiler, whether it's gcc or something else.
With your compiler installed, you're ready to program in C. Let's start by taking a look at the basic structure of one of the simplest C programs you could write.

The Simplest C Program

Let's look at a simple C program and use it both to understand the basics of C and the C compilation process. If you have your own computer with a C compiler installed as described earlier, you can create a text file named sample.c and use it to follow along while we step through this example. Note that if you leave off the .c in the file name, or if your editor appends .txt to the name, you'll probably get some sort of error when you compile it.
Here's our sample program:
/* Sample program */
#include <stdio.h>
int main()
{
printf("This is output from my first program!\n");
return 0;
}
When compiled and executed, this program instructs the computer to print out the line "This is output from my first program!" and then stop. You can't get much simpler than that! Now let's take a look at what each line is doing:
Line 1 -- This is one way to write comments in C, between /* and */ on one or more lines.
Line 2 -- The #include command tells the compiler to look at other sources for existing C code, particularly libraries, which are files that include common reusable instructions. The <stdio.h> reference points to a standard C library with functions for getting input from a user and for writing output to the screen. We'll look at libraries more closely later.
Line 3 -- This line is the first line of a function definition. Every C program has at least one function, or a block of code representing something the computer should do when the program runs. The function performs its task and then produces a byproduct, called a return value, that can be used by other functions. At a minimum, the program has a function called main like the one shown here, whose return value has the data type int, which means integer. When we examine functions more later, you'll see what the empty parentheses mean.
Lines 4 and 7 -- The instructions within a function are enclosed in braces. Some programmers start and end a brace-enclosed block on separate lines as shown here. Others put the open brace ({) at the end of the first line of the function definition. Though C doesn't require each instruction to be typed on its own line, programmers typically write one instruction per line, indented with spaces, to make the code easier to read and edit later.
Line 5 -- This is a function call to a function named printf. That function is declared in the stdio.h header included on Line 2, so you don't have to write it yourself. This call to printf tells it what to print to the screen. The \n at the end, within the quotes, isn't printed, though; it's an escape sequence that instructs printf to move the cursor to the next line on the screen. Also, as you can see, every statement in the function must end with a semicolon.
Line 6 -- Every function that returns a value must include a return statement like this one. In C, the main function always has an integer return type, even if the return value isn't used elsewhere in the program. When you're running a C program, you're essentially running its main function, so when you're testing the program you can ask the computer to show the return value it produced. A return value of 0 is preferred, since programmers typically look for that value in testing to confirm the program ran successfully.
When you're ready to test your program, save the file and compile and run the program. If you're using the gcc compiler at a command line, and the program is in a file called sample.c, you can compile it with the following command:
gcc -o sample.exe sample.c
If there are no errors in the code, you should have a file named sample.exe in the same directory as sample.c after running this command. The most common error is a syntax error, meaning that you've mistyped something, such as leaving off a semicolon at the end of a line or not closing quotes or parentheses. If you need to make changes, open the file in your text editor, fix it, save your changes and try your compile command again.
To run the sample.exe program, enter the following command. Note the ./ which forces the computer to look at the current directory to find the executable file:
./sample.exe
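The program should print its message and exit. If you want to see the return value from main (the 0 sent back by Line 6), most Unix-style shells such as Bash keep the last program's exit status in a special variable, so a quick check right after the run might look like this:
./sample.exe
echo $?
The shell prints 0, confirming the program finished successfully.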
Those are the basics of coding and compiling for C, though there's a lot more you can learn about compiling from other C programming resources. Now, let's open the box and see what pieces C has for building programs.

Common Programming Concepts in C

Let's take a look at how to put some of the common programming concepts into practice in your C code. The following is a quick summary of these concepts:
Functions -- As stated earlier, a function is a block of code representing something the computer should do when the program runs. Some languages call these structures methods, though C programmers don't typically use that term. Your program may define several functions and call those functions from other functions. Later, we'll take a closer look at the structure of functions in C.
Variables -- Sometimes you need the flexibility to run a program without knowing ahead of time what values it will work with. Like other programming languages, C allows you to use variables when you need that flexibility. Like variables in algebra, a variable in computer programming is a placeholder that stands for some value that you don't know or haven't found yet.
Data types -- In order to store data in memory while your program is running, and to know what operations you can perform on that data, a programming language like C defines certain data types it will recognize. Each data type in C has a certain size, measured in binary bits or bytes, and a certain set of rules about what its bits represent. Coming up, we'll see how important it is to choose the right data type for the task when you're using C.
Operations -- In C, you can perform arithmetic operations (such as addition) on numbers and string operations (such as concatenation) on strings of characters. C also has built-in operations specifically designed for things you might want to do with your data. When we check out data types in C, we'll take a brief look at the operations, too.
Loops -- One of the most basic things a programmer will want to do is repeat an action some number of times based on certain conditions that come up while the program is running. A block of code designed to repeat based on given conditions is called a loop, and the C language provides the common loop structures while, do/while and for, along with continue, break and goto for controlling the flow within them. C also includes the common if/then/else conditionals and switch/case statements.
Data structures -- When your program has a lot of data to handle, and you need to sort or search through that data, you'll probably use some sort of data structure. A data structure is an organized way of representing several related pieces of data. The most common data structure is an array, which is simply a fixed-size, indexed list of values that all share one data type (see the short sketch after this list for an array and a loop in action). C has libraries available to handle some common data structures, though you can always write functions and set up your own structures, too.
Preprocessor operations -- Sometimes you'll want to give the compiler some instructions on things to do with your code before compiling it into the executable. These operations include substituting constant values and including code from C libraries (which you saw in the sample code earlier).
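To make a few of these concepts concrete before moving on, here's a minimal sketch (separate from the sample program above, with made-up test scores) that combines a variable, an array, a for loop and an if/else conditional:
#include <stdio.h>

int main()
{
    int scores[5] = {72, 85, 90, 64, 78};  /* an array: five integers, indexed 0 through 4 */
    int total = 0;
    int i;

    for (i = 0; i < 5; i++)                /* loop over every element of the array */
    {
        total = total + scores[i];
    }

    if (total / 5 >= 70)                   /* integer division gives the average */
    {
        printf("The average of %d passes.\n", total / 5);
    }
    else
    {
        printf("The average of %d does not pass.\n", total / 5);
    }

    return 0;
}
Compiled and run, this sketch prints the average (77) and reports that it passes.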
C also requires programmers to handle some concepts which many programming languages have simplified or automated. These include pointers, memory management, and garbage collection. Later pages cover the important things to know about these concepts when programming in C.
This quick overview of concepts may seem overwhelming if you're not already a programmer. Before you move on to tackle a dense C programming guide, let's take a user-friendly look at the core concepts among those listed above, starting with functions.

Functions in C

Most computer programming languages allow you to create functions of some sort. Functions let you chop up a long program into named sections so that you can reuse those sections throughout the program. Programmers for some languages, especially those using object-oriented programming techniques, use the term method instead of function.
Functions accept parameters and return a result. The block of code that comprises a function is its function definition. The following is the basic structure of a function definition:
<return type> <function name>(<parameters>)
{
<statements>
return <value appropriate for the return type>;
}
At a minimum, a C program has one function named main. The compiler will look for a main function as the starting point for the program, even if the main function calls other functions within it. The following is the main we saw in the simple C program we looked at before. It has a return type of integer, takes no parameters, and has two statements (instructions within the function), one of which is its return statement:
int main()
{
printf("This is output from my first program!\n");
return 0;
}
Functions other than main have a definition and one or more function calls. A function call is a statement or part of a statement within another function. The function call names the function it's calling followed by parentheses. If the function has parameters, the function call must include corresponding values to match those parameters. This additional part of the function call is called passing parameters to the function.
But what are parameters? A parameter for a function is a piece of data of a certain data type that the function requires to do its work. Functions in C can accept many parameters, sometimes called arguments. Each parameter added to a function definition must specify two things: its data type and its variable name within the function block. Multiple parameters are separated by commas. In the following function, there are two parameters, both integers:
int doubleAndAdd(int a, int b)
{
return ((2*a)+(2*b));
}
Next, let's continue our look at functions by zooming out to look at how they fit within a larger C program.

FUNCTION DECLARATIONS
In C, you'll probably hear the term function declaration more than function prototype, especially among older C programmers. We're using the term function prototype in this article, though, because it carries an important distinction. Originally, a function declaration did not have to say anything about parameters, so the return type, function name and a pair of empty parentheses were sufficient. A function prototype, though, gives the compiler important additional information by including the number and data types of the parameters the function accepts. Prototypes have become a best practice among coders today, in C and other programming languages.
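To make the distinction concrete, here's what the two forms would look like for the doubleAndAdd function used later in this article; the first leaves the parameters unspecified, while the second lets the compiler check the number and types of arguments in every call:
int doubleAndAdd();           /* old-style declaration: parameters left unspecified */
int doubleAndAdd(int, int);   /* prototype: parameter count and types are declared */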

Function Prototypes
In C, you can add a function definition anywhere within the program (except within another function). The only condition is that you must tell the compiler in advance that the function exists somewhere later in the code. You'll do this with a function prototype at the beginning of the program. The prototype is a statement that looks similar to the first line of the definition. In C, you don't have to give the names of the parameters in the prototype, only the data types. The following is what the function prototype would look like for the doubleAndAdd function:
int doubleAndAdd(int, int);
Imagine function prototypes as the packing list for your program. The compiler will unpack and assemble your program just as you might unpack and assemble a new bookshelf. The packing list helps you ensure you have all the pieces you need in the box before you start assembling the bookshelf. The compiler uses the function prototypes in the same way before it starts assembling your program.
If you're following along with the sample.c program we looked at earlier, open and edit the file to add a function prototype, function definition and function call for the doubleAndAdd function shown here. Then, compile and run your program as before to see how the new code works. You can use the following code as a guide to try it out:
#include <stdio.h>
int doubleAndAdd(int, int);
int main()
{
printf("This is output from my first program!\n");
printf("If you double then add 2 and 3, the result is: %d \n", doubleAndAdd(2,3));
return 0;
}
int doubleAndAdd(int a, int b)
{
return ((2*a)+(2*b));
}
So far we've looked at some basic structural elements in a C program. Now, let's look at the types of data you can work with in a C program and what operations you can perform on that data.


Data Types and Operations in C

From your computer's point of view, your program is all just a series of ones and zeros. Data types in C tell the computer how to use some of those bits.
From your computer's perspective, data is nothing but a series of ones and zeros representing on and off states for the electronic bits on your hard drive or in your computer's processor or memory. It's the software you're running on a computer that determines how to make sense of those billions of binary digits. C is one of few high-level languages that can easily manipulate data at the bit level in addition to interpreting the data based on a given data type.
A data type is a small set of rules that indicate how to make sense of a series of bits. The data type has a specific size plus its own way of performing operations (such as adding and multiplying) on data of that type. In C, the size of a data type is related to the processor you're using. For example, in C99 an integer (int) is typically 16 bits long on a 16-bit processor, while on 32-bit and 64-bit processors it's usually 32 bits long.
Another important thing for C programmers to know is how the language handles signed and unsigned data types. A signed type means that one of its bits is reserved as the indicator for whether it's a positive or negative number. So, while an unsigned int on a 16-bit system can handle numbers between 0 and 65,535, a signed int on the same system can handle numbers between -32,768 and 32,767. If an operation causes an int variable to go beyond its range, the programmer has to handle the overflow with additional code.
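If you're curious about the sizes and ranges on your own system, a short sketch like the following will report them using the sizeof operator and the constants defined in the standard limits.h header (the numbers printed will vary with your compiler and processor):
#include <stdio.h>
#include <limits.h>

int main()
{
    /* sizeof reports a size in bytes; CHAR_BIT is the number of bits in a byte */
    printf("An int here is %lu bytes (%lu bits).\n",
           (unsigned long) sizeof(int),
           (unsigned long) (sizeof(int) * CHAR_BIT));

    /* The ranges for signed and unsigned int on this system */
    printf("Signed int range: %d to %d\n", INT_MIN, INT_MAX);
    printf("Unsigned int maximum: %u\n", UINT_MAX);

    return 0;
}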
Given these constraints and system-specific peculiarities in C data types and operations, C programmers must choose their data types based on the needs of their programs. Some of the data types they can choose are the primitive data types in C, meaning those built in to the C programming language. Look to your favorite C programming guide for a complete list of the data types in C and important information about how to convert data from one type to another.
C programmers can also create data structures, which combine primitive data types and a set of functions that define how the data can be organized and manipulated. Though the use of data structures is an advanced programming topic and beyond the scope of this article, we will take a look at one of the most common structures: arrays. An array is a virtual list containing pieces of data that are all the same data type. An array's size can't be changed, though its contents can be copied to other larger or smaller arrays.
Though programmers often use arrays of numbers, character arrays, called strings, have the most distinctive features. A string lets you save something you might say (like "hello") as a series of characters, which your C program can read in from the user or print out on the screen. String manipulation has such a distinct set of operations that it has its own dedicated C library (string.h) with the typical string functions.
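As a small illustration of string handling, the following sketch builds a string with strcat and measures it with strlen, two of the common string.h functions (the greeting text and buffer size are arbitrary choices for the example):
#include <stdio.h>
#include <string.h>

int main()
{
    char greeting[32] = "hello";   /* a character array with room to grow */

    strcat(greeting, ", world");   /* concatenate a second string onto the first */
    printf("\"%s\" is %lu characters long.\n",
           greeting, (unsigned long) strlen(greeting));

    return 0;
}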
The built-in operations in C are the typical operations you'd find in most programming languages. When you're combining several operations into a single statement, be sure to know the operator precedence, or the order in which the program will perform each operation in a mathematical expression. For example, (2+5)*3 equals 21 while 2+5*3 equals 17, because C will perform multiplication before addition unless there are parentheses indicating otherwise.
If you're learning C, make it a priority to familiarize yourself with all of its primitive data types and operations and the precedence for operations in the same expression. Also, experiment with different operations on variables and numbers of different data types.
At this point, you've scratched the surface of some important C basics. Next, though, let's look at how C enables you to write programs without starting from scratch every time.

 

Don't Start from Scratch, Use Libraries

Libraries are very important in C because the C language supports only the most basic features that it needs. For example, C doesn't contain input-output (I/O) functions to read from the keyboard and write to the screen. Anything that extends beyond the basics must be written by a programmer. If the chunk of code is useful to multiple different programs, it's often put into a library to make it easily reusable.
In our discussion of C so far, we've already seen one library, the standard I/O (stdio) library. The #include line at the beginning of the program instructs the C compiler to pull in the library's declarations from its header file, named stdio.h. C maintainers include standard C libraries for I/O, mathematical functions, time manipulation and common operations on certain data structures, such as a string of characters. Search the Web or your favorite C programming guide for information about the C89 standard library and the updates and additions in C99.
You, too, can write C libraries. By doing so, you can split your program into reusable modules. This modular approach not only makes it easy to include the same code in multiple programs, but it also makes for shorter program files which are easier to read, test and debug.
To use the functions within a header file, add a #include line for it at the beginning of your program. For standard libraries, put the name of the library's corresponding header file between angle brackets (< >). For libraries you create yourself, put the name of the file between double quotes. Unlike statements in other parts of your C program, a #include line doesn't end with a semicolon. The following shows including one of each type of library:
#include <math.h>
#include "mylib.h"
A comprehensive C programming source should provide the instructions you need to write your own libraries in C. The function definitions you'll write are not any different whether they're in a library or in your main program. The difference is that you'll compile them separately in something called an object file (with a name ending in .o), and you'll create a second file, called a header file (with a name ending in .h) which contains the function prototypes corresponding to each function in the library. It's the header file you'll reference in your #include line in each main program that uses your library, and you'll include the object file as an argument in the compiler command each time you compile that program.
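As a rough sketch of how those pieces fit together (the file names and the reuse of doubleAndAdd are just for illustration), a tiny library might consist of a header file and a source file like these:
/* mylib.h -- the header file: function prototypes for the library */
int doubleAndAdd(int, int);

/* mylib.c -- the library source: the matching function definitions */
#include "mylib.h"

int doubleAndAdd(int a, int b)
{
    return ((2 * a) + (2 * b));
}
With gcc, you might then compile the library into an object file and link it with a main program that has #include "mylib.h" at the top:
gcc -c mylib.c
gcc -o myprogram.exe myprogram.c mylib.o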
The C features we've explored so far are typical in other programming languages, too. Next, though, we'll talk about how C manages your computer's memory.

 

Some Pointers about Pointers in C

When your C program is loaded into memory (typically the random-access memory, or RAM, in your computer), each piece of the program is associated with an address in memory. This includes the variables you're using to hold certain data. Each time your program calls a function, it loads that function and all of its associated data into memory just long enough to run that function and return a value. If you pass parameters to the function, C automatically makes a copy of the value to use in the function.
Sometimes when you run a function, though, you want to make some permanent change to the data at its original memory location. If C makes a copy of data to use in the function, the original data remains unchanged. If you want to change that original data, you have to pass a pointer to its memory address (pass by reference) instead of passing its value to the function (pass by value).
Pointers are used everywhere in C, so if you want to use the C language fully you have to have a good understanding of pointers. A pointer is a variable like other variables, but its purpose is to store the memory address of some other data. The pointer also has a data type so it knows how to recognize the bits at that memory address.
When you look at two variables side-by-side in C code, you may not always recognize the pointer. This can be a challenge for even the most experienced C programmers. When you first create a pointer, though, it's more obvious because there must be an asterisk immediately before the variable name. This is known as the indirection operator in C. The following example code creates an integer i and a pointer to an integer p:
int i;
int *p;
Currently there is no value assigned to either i or p. Next, let's assign a value to i and then assign p to point to the address of i.
i = 3;
p = &i;
Here you can see the ampersand (&) used as the address operator immediately before i, meaning the "address of i." You don't have to know what that address is to make the assignment. That's good, because it will likely be different every time you run the program! Instead, the address operator will determine the address associated with that variable while the program is running. Without the address operator, the assignment p = i would try to store the value 3 in p as if it were a memory address, rather than storing the address of the variable i.
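Continuing that little example, here's a brief sketch showing the difference between the value a pointer points to and the address it holds (the address printed will be different on every run):
#include <stdio.h>

int main()
{
    int i;
    int *p;

    i = 3;
    p = &i;                                            /* p now holds the address of i */

    printf("The value at that address: %d\n", *p);     /* prints 3 */
    printf("The address itself: %p\n", (void *) p);    /* some address, new each run */

    return 0;
}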
Next, let's look at how you can use pointers in C code and the challenges you'll want to be prepared for.

 

Using Pointers Correctly in C

If you want to become proficient in C programming, you'll need a firm grasp of how to effectively use pointers in your code.
Once you have a pointer, you can use that in place of a variable of the same data type in operations and function calls. In the following example, the pointer to i is used instead of i within a larger operation. The asterisk used with the p (*p) indicates that the operation should use the value that p is pointing to at that memory address, not the memory address itself:
int b;
b = *p + 2;
Without pointers, it's hard to divide work among functions outside of main that need to change your program's data. To illustrate this, suppose you've created a variable in main called h that stores the user's height to the nearest centimeter. You also call a function you've written named setHeight that prompts the user to set that height value. The lines in your main function might look something like this:
int h;
setHeight(h); /* There is a potential problem here. */
This function call will try to pass the value of h to setHeight. However, when the function finishes running, the value of h will be unchanged because the function only used a copy of it and then discarded it when it finished running.
If you want to change h itself, you should first ensure that the function can take a pointer to an existing value rather than a new copy of a value. The first line of setHeight, then, would use a pointer instead of a value as its parameter (note the indirection operator):
void setHeight(int *height) { /* Function statements go here */ }
Then, you have two choices for calling setHeight. The first is to use the address operator for h as the passed parameter (&h). The other is to create a separate pointer to h and pass that instead. The following shows both options:
setHeight(&h); /* Pass the address of h to the function */
int *p;
p = &h;
setHeight(p); /* Pass a separate pointer to the address of h to the function */
The second option reveals a common challenge when using pointers. The challenge is having multiple pointers to the same value. This means that any change in that one value affects all its pointers at once. This could be a good or bad thing, depending on what you're trying to accomplish in your program. Again, mastering the use of pointers is an important key to mastering C programming. Practice with pointers as much as possible so you'll be ready to face these challenges.
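Putting the pieces together, here's a minimal sketch of what a complete setHeight program might look like, assuming the function simply prompts the user and reads a whole number with scanf (the prompt wording is invented for the example):
#include <stdio.h>

void setHeight(int *height);       /* function prototype */

int main()
{
    int h = 0;

    setHeight(&h);                 /* pass the address of h so the function can change it */
    printf("Your height is %d cm.\n", h);

    return 0;
}

void setHeight(int *height)
{
    printf("Enter your height to the nearest centimeter: ");
    scanf("%d", height);           /* height is already a pointer, so no & is needed here */
}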
The C features we've explored so far are typical in other programming languages, too. Next, though, we'll look at C's demands for careful memory management.

 

The Importance of Memory Management in C

 One of the things that makes C such a versatile language is that the programmer can scale down a program to run with a very small amount of memory. When C was first written, this was an important feature because computers weren't nearly as powerful as they are today. With the current demand for small electronics, from mobile phones to tiny medical devices, there's a renewed interest in keeping the memory requirements small for some software. C is the go-to language for most programmers who need a lot of control over memory usage.
To better understand the importance of memory management, consider how a program uses memory. When you first run a program, it loads into your computer's memory and begins to execute by sending and receiving instructions from the computer's processor. When the program needs to run a particular function, it loads that function into yet another part of memory for the duration of its run, then abandons that memory when the function is complete. Plus, each new piece of data used in the main program takes up memory for the duration of the program.
If you want more control over all this, you need dynamic storage allocation. C supports dynamic storage allocation, which is the ability to reserve memory as you need it and free that memory as soon as you're finished using it. Many programming languages have automatic memory allocation and garbage collection that handle these memory management tasks. C, though, allows (and in some cases requires) you to be explicit about memory allocation with the following key functions from the standard C library:
malloc -- Short for memory allocation, malloc is used to reserve a block of memory of a given size to store a certain type of data your program needs to process. When you use malloc, you're creating a pointer to the allocated memory. This isn't necessary for a single piece of data, such as one integer, which is allocated as soon as you first declare it (as in int i). However, it is an important part of creating and managing data structures such as arrays. Alternate memory allocation options in C are calloc, which also clears the memory when it's reserved, and realloc, which resizes previously reserved memory.
free -- Use free to release the memory previously assigned to a given pointer.
Best practice when using malloc and free is that anything you allocate should be freed. Whenever you allocate something, even in a temporary function, it remains in memory until the operating system cleans up the space. To ensure that memory is free and ready to use immediately, though, you should free it before the current function exits. This memory management means you can keep your program's memory footprint to a minimum and avoid memory leaks. A memory leak is a program flaw in which it continues using more and more memory until there's none left to allocate, causing the program to stall or crash. On the other hand, don't get so anxious about freeing memory that you free up, and thus lose, something that you need later in the same function.
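Here's a brief sketch of that allocate, check, use and free pattern, assuming we want room for ten integers (the count and the values stored are arbitrary for the example):
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int count = 10;
    int i;
    int *numbers = malloc(count * sizeof(int));   /* reserve memory for ten integers */

    if (numbers == NULL)                          /* always confirm the allocation succeeded */
    {
        printf("Out of memory.\n");
        return 1;
    }

    for (i = 0; i < count; i++)
    {
        numbers[i] = i * i;                       /* use the memory like an ordinary array */
    }
    printf("The last value is %d.\n", numbers[count - 1]);

    free(numbers);                                /* give the memory back when finished */
    numbers = NULL;                               /* avoid reusing the freed pointer by mistake */

    return 0;
}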
Throughout this article, you've learned some of the basic structure and core concepts of the C programming language. We've looked at its history, the characteristics it has in common with other programming languages and the important features that make it a unique and versatile option for coding software. Launch over to the next page for lots more information, including some programming guides that will carry you further on your journey into C.




Mac, ENIAC or UNIVAC: The Computer History Quiz




Open the following link to take the quiz:


http://computer.howstuffworks.com/computer-history-quiz.htm

How will biometrics affect our privacy?




We've all seen movies in which a character has a retinal scan to prove his or her identity before walking into a top-secret installation. That's an example of a biometric system. In general, biometrics is a collection of measures of human physiology and behavior. A biometric system could scan a person's fingerprint or analyze the way he or she types on a keyboard. The purpose of most biometric systems is to authenticate a person's claimed identity.

Fingerprint scanners are a popular type of biometric system.

Biometrics tend to be more convenient than other methods of identity authentication. You might forget your ID at home when you head out the door, but you'll still be able to use biometric devices. Imagine verifying your identity while at the store by swiping your finger across a sensor.

But along with convenience and security comes a concern for privacy. For biometrics to work, there needs to be a database containing the relevant information for each individual authorized by the system. For example, at that top-secret installation, every employee's biometric signature would have to be recorded so that the scanners could verify each person's identity.

This might not present much of a problem on its own. If the only data the system stores relates to the actual biometric measurements, privacy violations are at a minimum. But by their very nature, biometric systems collect more information than just the users' fingerprints, retinal patterns or other biometric data. At a basic level, most systems will record when and where a person is at the time of a scan.


I Recognize That Face

You might think of fingerprint or retinal scanners when you hear the word biometrics, but the term has a broader definition. Facial recognition technology falls into the biometric category. There are already several cameras on the market that can detect faces. A few are able to recognize and remember a group of faces. You just take a picture of a friend, tag the photo and the camera will automatically tag any future photos of that friend. It's both cool and creepy.

  

Biometric systems with cameras may use facial recognition software or study the way you move to identify you.

Imagine using this technology in public places to identify the people passing through. For example, a major city might install cameras at high-traffic areas to scan for terrorists or identify criminals. While the motivation for using that technology might be pure, it creates difficult privacy issues. The city would have a record of everyone who passed through that neighborhood. The technology treats everyone as a suspect as if it's only a matter of time before each of us commits a crime.
And what happens if the technology makes a mistake and misidentifies someone? Weather conditions, clothing, hairstyles and even the cleanliness of the lens could affect the ability of the camera to identify people. Critics might ask: Why install a system that's unreliable?

What happens if a person suffers an illness or injury that changes his or her appearance? Such a change could present problems with biometrics. Adjusting the biometric system to accommodate the change could also result in a violation of the user's privacy. The system administrator now knows more details about the user.
A society with pervasive biometric systems would make anonymity a virtual impossibility. Should that society become oppressive or otherwise abusive to the population, the citizens would have few opportunities to react without revealing their own identities.
Groups like the Biometrics Institute are aware of privacy concerns and strive to create processes to limit the chance for biometric applications to violate a person's privacy. Other groups advocate that companies, governments and other organizations conduct a privacy assessment before installing a biometric system. With vigilance and caution, we may find a way to incorporate biometrics into our lives and still maintain our privacy.


Sunday, 19 October 2014

The History of Microprocessor and Personal Computer


The personal computing business as we know it owes its existence to an environment of enthusiasts, entrepreneurs and happenstance. Before PCs, the mainframe and minicomputer business model was built around a single company providing an entire ecosystem: building the hardware, installing and maintaining it, writing the software, and training the operators.
This approach would serve its purpose in a world that seemingly required few computers. It made the systems hugely expensive yet highly lucrative for the companies involved since the initial cost and service contract ensured a steady stream of revenue. The "big iron" companies weren't the initial driving force in personal computing because of cost, lack of off-the-shelf software, perceived lack of need for individuals to own computers, and the generous profit margins afforded from mainframe and minicomputer contracts.
It was in this atmosphere that personal computing began with hobbyists looking for creative outlets not offered by their day jobs involving the monolithic systems. The invention of the microprocessor, DRAM, and EPROM integrated circuits would spark the widespread use of the BASIC high-level language variants, which would lead to the introduction of the GUI and bring computing to the mainstream. The resulting standardization and commoditization of hardware would finally make computing relatively affordable for the individual.
Over the next few weeks we'll be taking an extensive look at the history of the microprocessor and the personal computer, from the invention of the transistor to modern day chips powering a multitude of connected devices.

1947 - 1974: Foundations

Leading up to Intel's 4004, the first commercial microprocessor

Early personal computing required enthusiasts to have skills in both electrical component assembly (predominantly the ability to solder) and machine coding, since software at this time was a bespoke affair where it was available at all.
The established commercial market leaders didn't take personal computing seriously because of limited input-output functionality and software, a dearth of standardization, high user skill requirement, and few envisaged applications. Intel's own engineers had lobbied for the company to pursue a personal computing strategy almost as soon as the 8080 started being implemented in a much wider range of products than originally foreseen. Steve Wozniak would plead with his employer, Hewlett-Packard, to do the same.

John Bardeen, William Shockley and Walter Brattain at Bell Labs, 1948.
While hobbyists initiated the personal computing phenomenon, the current situation is largely an extension of the lineage that began with work by Michael Faraday, Julius Lilienfeld, Boris Davydov, Russell Ohl, Karl Lark-Horovitz, to William Shockley, Walter Brattain, John Bardeen, Robert Gibney, and Gerald Pearson, who co-developed the first transistor (a contraction of "transfer resistance") at Bell Telephone Labs in December 1947.
Bell Labs would continue to be a prime mover in transistor advances (notably the Metal Oxide Semiconductor transistor, or MOSFET in 1959) but granted extensive licensing in 1952 to other companies to avoid anti-trust sanctions from the U.S. Department of Justice. Thus Bell and its manufacturing parent, Western Electric, were joined by forty companies including General Electric, RCA, and Texas Instruments in the rapidly expanding semiconductor business. Shockley would leave Bell Labs and start Shockley Semi-Conductor in 1956.

The first transistor ever assembled, invented by Bell Labs in 1947
An excellent engineer, Shockley's caustic personality allied with his poor management of employees doomed the undertaking in short order. Within a year of assembling his research team he had alienated enough members to cause the mass exodus of "The Traitorous Eight", which included Robert Noyce and Gordon Moore, two of Intel's future founders, Jean Hoerni, inventor of the planar manufacturing process for transistors, and Jay Last. Members of The Eight would provide the nucleus of the new Fairchild Semiconductor division of Fairchild Camera and Instrument, a company that became the model for the Silicon Valley start-up.
Fairchild company management would go on to increasingly marginalize the new division because of focus on profit from high profile transistor contracts such as those used in the IBM-built flight systems of the North American XB-70 Valkyrie strategic bomber, the Autonetics flight computer of the Minuteman ICBM system, CDC 6600 supercomputer, and NASA's Apollo Guidance Computer.
However, profit declined as Texas Instruments, National Semiconductor, and Motorola gained their share of contracts. By late 1967, Fairchild Semiconductor had become a shadow of its former self as budget cuts and key personnel departures began to take hold. Prodigious R&D acumen wasn't translating into commercial product, and combative factions within management proved counter-productive to the company.

The Traitorous Eight who quit Shockley to start Fairchild Semiconductor. From left: Gordon Moore, Sheldon Roberts, Eugene Kleiner, Robert Noyce, Victor Grinich, Julius Blank, Jean Hoerni, Jay Last. (Photo © Wayne Miller/Magnum)
Chief among those to leave would be Charles Sporck, who revitalized National Semiconductor, as well as Gordon Moore and Robert Noyce. While over fifty new companies would trace their origins from the breakup of Fairchild's workforce, none achieved so much as the new Intel Corporation in such a short span. A single phone call from Noyce to Arthur Rock, the venture capitalist, resulted in the $2.3 million start-up funding being raised in an afternoon.
The ease with which Intel was brought into existence was in large part due to the stature of Robert Noyce and Gordon Moore. Noyce is largely credited with the co-invention of the integrated circuit along with Texas Instrument's Jack Kilby, although he almost certainly borrowed very heavily from earlier work carried out by James Nall and Jay Lathrop's team at the U.S. Army's Diamond Ordnance Fuze Laboratory (DOFL), which produced the first transistor constructed using photolithography and evaporated aluminum interconnects in 1957-59, as well as Jay Last's integrated circuit team (including the newly acquired James Nall) at Fairchild, of which Robert Noyce was project chief.

First planar IC (Photo © Fairchild Semiconductor).
Moore and Noyce would take with them from Fairchild the new self-aligned silicon gate MOS (metal oxide semiconductor) technology suitable for manufacturing integrated circuit that had recently been pioneered by Federico Faggin, a loanee from a joint venture between the Italian SGS and Fairchild companies. Building upon the work of John Sarace's Bell Labs team, Faggin would take his expertise to Intel upon becoming a permanent U.S. resident.
Fairchild would rightly feel aggrieved over the defection, as it would over many employee breakthroughs that ended up in the hands of others -- notably National Semiconductor. This brain drain was not quite as one sided as it would appear, since Fairchild's first microprocessor, the F8, in all likelihood traced its origins to Olimpia Werke's unrealized C3PF processor project.
In an era when patents had yet to assume the strategic importance they have today, time to market was of paramount importance and Fairchild was often too slow in realizing the significance of its developments. The R&D division became less product-orientated, devoting sizable resources to research projects.
Texas Instruments, the second largest integrated circuit producer, quickly eroded Fairchild's position as market leader. Fairchild still held a prominent standing in the industry, but internally, the management structure was chaotic. Production quality assurance (QA) was poor by industry standards with yields of 20% being common.
While engineering employee turnover increased as "Fairchildren" left for more stable environments, Fairchild's Jerry Sanders moved from aerospace and defense marketing to overall director of marketing and unilaterally decided to launch a new product every week -- the "Fifty-Two" plan. The accelerated time to market would doom many of these products to yields of around 1%. An estimated 90% of the products shipped later than scheduled, carried defects in design specification, or both. Fairchild's star was about to be eclipsed.
If Gordon Moore and Robert Noyce's stature gave Intel a flying start as a company, the third man to join the team would become both the public face of the company and its driving force. Andrew Grove, born András Gróf in Hungary in 1936, became Intel's Director of Operations despite having little background in manufacturing. The choice seemed perplexing on the surface -- even allowing for his friendship with Gordon Moore -- as Grove was an R&D scientist with a background in chemistry at Fairchild and a lecturer at Berkeley with no experience in company management.
The fourth man in the company would define its early marketing strategy. Bob Graham was technically the third employee of Intel, but was required to give three months' notice to his employer. The delay in moving to Intel would allow Andy Grove to acquire a much larger management role than originally envisaged.

Intel's first hundred employees pose outside the company’s Mountain View, California, headquarters, in 1969.
(Source: Intel / Associated Press)
An excellent salesman, Graham was seen as one of two outstanding candidates for the Intel management team -- the other, W. Jerry Sanders III, was a personal friend of Robert Noyce. Sanders was one of the few Fairchild management executives to retain their jobs in the wake of C. Lester Hogan's appointment as CEO (from an irate Motorola).
Sanders' initial confidence at remaining Fairchild's top marketing man evaporated quickly as Hogan grew unimpressed with Sanders' flamboyancy and his team's unwillingness to accept small contracts ($1 million or less). Hogan effectively demoted Sanders twice in a matter of weeks with the successive promotions of Joseph Van Poppelen and Douglas J. O'Conner above him. The demotions achieved what Hogan had intended -- Jerry Sanders resigned and most of Fairchild's key positions were occupied by Hogan's former Motorola executives.
Within weeks Jerry Sanders had been approached by four other ex-Fairchild employees from the analog division interested in starting up their own business. As originally conceived by the four, the company would produce analog circuits since the breakup (or meltdown) of Fairchild was fostering a vast number of start-ups looking to cash in on the digital circuit craze. Sanders joined on the understanding that the new company would also pursue digital circuits. The team would have eight members, including Sanders, Ed Turney, one of Fairchild's best salesmen, John Carey, and chip designer Sven Simonssen as well as the original four analog division members, Jack Gifford, Frank Botte, Jim Giles, and Larry Stenger.
Advanced Micro Devices, as the company would be known, got off to a rocky start. Intel had secured funding in less than a day based on the company being formed by engineers, but investors were much more reticent when faced with a semiconductor business proposal headed by marketing executives. The first stop in securing AMD's initial $1.75 million capital was Arthur Rock who had supplied funding for both Fairchild Semiconductor and Intel. Rock declined to invest, as would a succession of possible money sources.
Eventually, Tom Skornia, AMD's newly minted legal representative arrived at Robert Noyce's door. Intel's co-founder would thus become one of the founding investors in AMD. Noyce's name on the investor list added a degree of legitimacy to the business vision that AMD had so far lacked in the eyes of possible investors. Further funding followed, with the revised $1.55 million target reached just before the close of business on June 20, 1969.
Intel's formation was somewhat more straightforward allowing the company to get straight to business once funding and premises were secured. Its first commercial product was also one of the five notable industry "firsts" accomplished in less than three years that were to revolutionize both the semiconductor industry and the face of computing.
Honeywell, one of the computer vendors that lived within IBM's vast shadow, approached numerous chip companies with a request for a 64-bit static RAM chip.
Intel had already formed two groups for chip manufacture, a MOS transistor team led by Les Vadász, and a bipolar transistor team led by Dick Bohn. The bipolar team was first to achieve the goal, and the world's first 64-bit SRAM chip was handed over to Honeywell in April 1969 by its chief designer, H.T. Chua. Delivering a successful design on the first attempt for a million-dollar contract only added to Intel's early industry reputation.

Intel’s first product, a 64-bit SRAM based on the newly developed Schottky Bipolar technology. (CPU-Zone)
In keeping with the naming conventions of the day, the SRAM chip was marketed under its part number, 3101. Intel, along with virtually all chipmakers of the time, did not market its products to consumers, but to engineers within companies. Part numbers, especially if they had significance such as the transistor count, were deemed to appeal more to prospective clients. Likewise, giving the product an actual name could signify that the name masked engineering deficiencies or a lack of substance. Intel tended to move away from numerical part naming only when it became painfully apparent that numbers couldn't be trademarked.
While the bipolar team provided the first breakout product for Intel, the MOS team identified the main culprit in its own chips' failings. The silicon-gate MOS process required numerous heating and cooling cycles during chip manufacturing. These cycles caused variations in the expansion and contraction rate between the silicon and the metal oxide, which led to cracks that broke circuits in the chip. Gordon Moore's solution was to "dope" the metal oxide with impurities to lower its melting point, allowing the oxide to flow with the cyclic heating. The resulting chip that arrived in July 1969 from the MOS team (an extension of the work done at Fairchild on the 3708 chip) became the first commercial MOS memory chip, the 256-bit 1101.
Honeywell quickly signed up for a successor to the 3101, dubbed the 1102, but early in its development a parallel project, the 1103, headed by Vadász with Bob Abbott, John Reed and Joel Karp (who also oversaw the 1102's development) showed considerable potential. Both were based on a three-transistor memory cell proposed by Honeywell's William Regitz that promised much higher cell density and lower manufacturing cost. The downside was that the memory wouldn't retain information in an unpowered state and the circuits would need to have voltage applied (refreshed) every two milliseconds.

The first MOS memory chip, Intel 1101, and first DRAM memory chip, Intel 1103. (CPU-Zone)
At the time, computer random access memory was the province of magnetic-core memory. This technology was rendered all but obsolete with the arrival of Intel's 1103 DRAM (dynamic random access memory) chip in October 1970, and by the time manufacturing bugs were worked out early the next year, Intel had a sizeable lead in a dominant and fast-growing market -- a lead it benefited from until Japanese memory makers caused a sharp decline in memory prices at the start of the 1980s through massive infusions of capital into manufacturing capacity.
Intel launched a nationwide marketing campaign inviting magnetic-core memory users to phone Intel collect and have their expenditure on system memory slashed by switching to DRAM. Inevitably, customers would enquire about second-source supply of the chips in an era when yields and supply couldn't be taken for granted.
Andy Grove was vehemently opposed to second sourcing, but such was Intel's status as a young company that it had to accede to industry demand. Intel chose a Canadian company, Microsystems International Limited (MIL), as its first second source of chip supply rather than a larger, more experienced company that could dominate Intel with its own product. Intel gained around $1 million from the license agreement and gained further when MIL attempted to boost profits by increasing wafer sizes (from two inches to three) and shrinking the chip. MIL's customers turned to Intel as the Canadian firm's chips came off the assembly line defective.
Intel's initial experience was not indicative of the industry as a whole, nor of its own later issues with second sourcing. AMD's growth was directly aided by becoming a second source for Fairchild's 9300-series TTL (transistor-transistor logic) chips, and by securing, designing, and delivering a custom chip for Westinghouse's military division that Texas Instruments (the initial contractor) had difficulty producing on time.
Intel's early fabrication failings with the silicon-gate process also led to its third, and most immediately profitable, chip, as well as an industry lead in yields. Intel assigned another Fairchild alumnus, the young physicist Dov Frohmann, to investigate the process issues. Frohmann surmised that the gates of some of the transistors had become disconnected, floating above, and encased within, the oxide separating them from their electrodes.
Frohmann also demonstrated to Gordon Moore that these floating gates could hold an electrical charge for long periods (in some cases many decades) because of the surrounding insulator, and could thus be programmed. In addition, the charge could be dissipated with ionizing ultraviolet radiation, which would erase the programming.
Conventional memory required the programming circuits to be laid down during the chip's manufacture, with fuses built into the design to allow for variations in programming. This method is costly on a small scale, needs many different chips to suit individual purposes, and requires chip alteration whenever the circuits are redesigned or revised.
The EPROM (Erasable Programmable Read-Only Memory) revolutionized the technology, making memory programming far more accessible and many times faster, since the client no longer had to wait for application-specific chips to be manufactured.
The downside of this technology was that, in order for UV light to erase the chip, a relatively expensive quartz window had to be incorporated into the packaging directly above the die to allow light access. The high cost would later be eased by the introduction of one-time programmable (OTP) EPROMs, which did away with the quartz window (and the erase function), and by electrically erasable programmable ROMs (EEPROMs).
As with the 3101, initial yields were very poor -- less than 1% for the most part. The 1702 EPROM required a precise voltage for memory writes, and variances in manufacturing translated into an inconsistent write-voltage requirement: too little voltage and the programming would be incomplete, while too much risked destroying the chip. Joe Friedrich, recently lured away from Philco and another engineer who had honed his craft at Fairchild, hit upon passing a high negative voltage across the chips before writing data. Friedrich named the process "walking out", and it increased yields from one working chip every two wafers to sixty per wafer.

Intel 1702, the first EPROM chip. (computermuseum.li)
Because walking out did not physically alter the chip, other manufacturers selling Intel-designed ICs would not immediately discover the reason for Intel's leap in yields. These increased yields directly boosted Intel's fortunes as revenue climbed 600% between 1971 and 1973. The yields, stellar in comparison with those of the second-source companies, conferred a marked advantage for Intel over the same parts being sold by AMD, National Semiconductor, Signetics, and MIL.
ROM and DRAM were two essential components of a system that would become a milestone in the development of personal computing. In 1969, the Nippon Calculating Machine Corporation (NCM) approached Intel seeking a twelve-chip system for a new desktop calculator. Intel at this stage was still in the process of developing its SRAM, DRAM, and EPROM chips and was eager to secure its initial business contracts.
NCM's original proposal outlined a system requiring eight chips specific to the calculator, but Intel's Ted Hoff hit upon the idea of borrowing from the larger minicomputers of the day. Rather than individual chips handling individual tasks, the idea was to make a chip that tackled the combined workload, turning the individual tasks into subroutines as the larger computers did -- a general-purpose chip. Hoff's idea reduced the number of chips needed to just four: a shift register for input-output, a ROM chip, a RAM chip, and the new processor chip.
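To make Hoff's general-purpose approach concrete, here is a minimal, hypothetical C sketch (not Intel or NCM code, and the function names are invented for illustration): each calculator operation becomes just another subroutine, and a single dispatch routine stands in for the general-purpose processor that selects among them.

#include <stdio.h>

/* Hypothetical calculator operations, each written as a subroutine
   rather than built as a dedicated fixed-function chip. */
static int add(int a, int b)      { return a + b; }
static int subtract(int a, int b) { return a - b; }
static int multiply(int a, int b) { return a * b; }

/* One general-purpose dispatcher plays the role of the processor:
   it picks the subroutine for whatever task is requested. */
static int dispatch(char op, int a, int b)
{
    switch (op) {
    case '+': return add(a, b);
    case '-': return subtract(a, b);
    case '*': return multiply(a, b);
    default:  return 0; /* unrecognized operation */
    }
}

int main(void)
{
    printf("7 * 6 = %d\n", dispatch('*', 7, 6));
    return 0;
}

The point of the sketch is only the shape of the design: adding new functionality means adding a new subroutine, not a new chip.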
NCM and Intel signed the contract for the new system on February 6, 1970, and Intel received an advance of $60,000 against a minimum order of 60,000 kits (with eight chips per kit minimum) over three years. The job to bring the processor and its three support chips to fruition would be entrusted to another disaffected Fairchild employee.
Federico Faggin had grown disillusioned with both Fairchild's inability to translate its R&D breakthroughs into tangible products before rivals exploited them and his own continued position as a manufacturing process engineer, when his main interest lay in chip architecture. Contacting Les Vadász at Intel, he was invited to head a design project with no more foreknowledge than its description as "challenging". Faggin found out what the four-chip MCS-4 project entailed on April 3, 1970, his first day of work, when he was briefed by engineer Stan Mazor. The next day Faggin was thrown in at the deep end, meeting with Masatoshi Shima, NCM's representative, who expected to see the logic design of the processor rather than hear an outline from a man who had been on the project for less than a day.

Intel 4004, the first commercial microprocessor, had 2,300 transistors and ran at a clock speed of 740 kHz. (CPU-Zone)
Faggin's team, which now included Shima for the duration of the design phase, quickly set about developing the four chips. The design of the simplest of them, the 4001, was completed in a week, with the layout taking a single draftsman a month to finish. By May, the 4002 and 4003 had been designed and work had started on the 4004 microprocessor. The first pre-production run came off the assembly line in December, but because the vital buried-contact layer had been omitted during fabrication, the chips did not work. A second revision corrected the mistake, and three weeks later all four working chips were ready for the test phase.
The 4004 might have been a footnote in semiconductor history had it remained a custom part for NCM, but falling prices for consumer electronics, especially in the competitive desktop calculator market, caused NCM to approach Intel and ask for a reduction in unit pricing from the agreed contract. Armed with the knowledge that the 4004 could have many further applications, Bob Noyce proposed a price cut and a refund of NCM's $60,000 advance payment in exchange for Intel being free to market the 4004 to other customers in markets other than calculators. Thus the 4004 became the first commercial microprocessor.
Two other designs of the era were proprietary to whole systems: Garrett AiResearch's MP944 was a component of the Grumman F-14 Tomcat's Central Air Data Computer, which was responsible for optimizing the fighter's variable-geometry wings and glove vanes, while Texas Instruments' TMS 0100 and TMS 1000 were initially available only as components of handheld calculators such as the Bowmar 901B.
While the 4004 and MP944 required a number of support chips (ROM, RAM, and I/O), the Texas Instruments chip combined these functions with the CPU on a single die -- the world's first microcontroller, or "computer-on-a-chip" as it was marketed at the time.
Inside the Intel 4004
Texas Instruments and Intel would enter into a cross-license involving logic, process, microprocessor, and microcontroller IP in 1971 (and again in 1976) that would herald an era of cross-licensing, joint ventures, and the patent as a commercial weapon.
Completion of the NCM (Busicom) MCS-4 system freed up resources for the continuation of a more ambitious project whose origins pre-dated the 4004 design. In late 1969, flush with cash from its IPO, Computer Terminal Corporation (CTC, later Datapoint) contacted both Intel and Texas Instruments with a requirement for an 8-bit terminal controller.
Texas Instruments dropped out fairly early, and Intel's 1201 project, started in March 1970, had stalled by July as project head Hal Feeney was co-opted onto a static RAM chip project. CTC eventually opted for a simpler discrete collection of TTL chips as deadlines approached. The 1201 project languished until Seiko showed interest in using the chip in a desktop calculator and Faggin had the 4004 up and running in January 1971.
In today's environment it seems almost incomprehensible that microprocessor development should play second fiddle to memory, but in the late 1960s and early 1970s computing was the province of mainframes and minicomputers. Fewer than 20,000 mainframes were sold worldwide each year, and IBM dominated this relatively small market, followed to a lesser extent by UNIVAC, GE, NCR, CDC, RCA, Burroughs, and Honeywell -- the "Seven Dwarfs" to IBM's "Snow White". Meanwhile, Digital Equipment Corporation (DEC) effectively owned the minicomputer market. Intel's management, like that of other microprocessor companies, couldn't see its chips usurping the mainframe and the minicomputer, whereas new memory chips could serve those sectors in vast quantities.
The 1201 duly arrived in April 1972 with its name changed to 8008, indicating that it was a follow-on from the 4004. The chip enjoyed reasonable success but was handicapped by its 18-pin packaging, which limited its input-output (I/O) and external bus options. Relatively slow and programmed in assembly language and machine code, the 8008 was still a far cry from the usability of modern CPUs, although the recent launch and commercialization of IBM's 23FD eight-inch floppy disk would add impetus to the microprocessor market over the next few years.

Intellec 8 development system (computinghistory.org.uk)
Intel's push for wider adoption resulted in the 4004 and 8008 being incorporated into the company's first development systems, the Intellec 4 and Intellec 8 -- the latter of which would figure prominently in the development of the first microprocessor-oriented operating system, a major "what if" moment in both industries as well as Intel's history. Feedback from users and potential customers, along with the growing complexity of calculator-based processors, resulted in the 8008 evolving into the 8080, which finally kick-started personal computer development.
