Monday, December 13, 2010

About Analogue Computers

An analog computer is one which can perform many calculations at once and can work with continuously variable quantities, including arbitrarily fine fractions of numbers. The term analog does not relate to how the computer is powered, and it is possible to have electronic analog computers. The characteristics of an analog computer mean it can be better than a digital computer at particular tasks.
A computer is simply a machine which processes data in a set fashion or, to put it another way, calculates. Today most computers are digital and work by reducing all data to binary numbers before processing. Analog computers go back thousands of years but differ from digital computers in only two fundamental ways.

Computer types

From the first generation of computers through to the fourth, computers have been classified according to how they operate, that is, how they input, process and output information. Below is a brief discussion of the various types of computers.
Computer types can be divided into three categories according to their electronic nature, that is, how a particular computer functions. These computer types are:
· Analogue Computers
· Digital Computers
· Hybrid Computers
Analogue Computers
Analogue computers use what are known as analogue signals, represented by continuously varying voltages, and are used in scientific research centers, hospitals and flight centers.
In analogue computers, values are represented by physically measurable quantities, e.g. voltages. Analogue computers perform arithmetic and logical operations by measuring physical changes, e.g. in temperature or pressure.
Digital Computers
Digital computers operate on electrical inputs that can attain only two states: ON = 1 and OFF = 0. Data is therefore represented by the digits 0 and 1, or the off state and the on state. A digital computer recognizes data by counting discrete signals (0 or 1); it is a high-speed programmable machine that computes values and stores results. The short sketch below illustrates this binary representation; after that we will move on to the third computer type mentioned above.
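As a minimal sketch in standard C++ (the value 42 is chosen arbitrarily for the example), the fragment below shows how a digital machine holds an ordinary decimal number as a pattern of on/off states:
  #include <bitset>
  #include <iostream>

  int main()
  {
    int value = 42;              // an ordinary decimal number
    std::bitset<8> bits(value);  // the same value as eight on/off states
    std::cout << value << " is stored as " << bits << std::endl;  // 00101010
    return 0;
  }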
Hybrid Computers
Hybrid computers are unique in that they combine both analogue and digital features and operations. They operate by using digital-to-analogue and analogue-to-digital converters; by linking the two types of computer described above, you arrive at this new type called a hybrid. A simplified sketch of the analogue-to-digital side follows below.
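As a simplified, hypothetical sketch of the analogue-to-digital conversion a hybrid machine depends on, the fragment below quantizes a continuous voltage into one of 256 discrete levels. Real converters are hardware devices, and the function name and voltage range here are invented purely for illustration:
  #include <algorithm>
  #include <cmath>
  #include <iostream>

  // Hypothetical 8-bit analogue-to-digital conversion: map a voltage
  // in the range [0, vMax] onto one of 256 discrete levels (0..255).
  int adc8(double voltage, double vMax)
  {
    double clamped = std::min(std::max(voltage, 0.0), vMax);  // keep in range
    return static_cast<int>(std::floor(clamped / vMax * 255.0 + 0.5));
  }

  int main()
  {
    std::cout << adc8(2.5, 5.0) << std::endl;  // mid-scale voltage -> 128
    return 0;
  }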
I hope this article on computer types gives you a basic foundation of how computers are classified and how they operate. The next article will focus on computer sizes, their definitions and characteristics.

Wednesday, December 1, 2010

An Overview and Brief Introduction to C++ Programming.

1 Introduction and Overview


The C++ programming language provides a model of memory and computation that closely matches that of
most computers. In addition, it provides powerful and flexible mechanisms for abstraction; that is, language
constructs that allow the programmer to introduce and use new types of objects that match the concepts
of an application. Thus, C++ supports styles of programming that rely on fairly direct manipulation
of hardware resources to deliver a high degree of efficiency plus higher-level styles of programming that
rely on user-defined types to provide a model of data and computation that is closer to a human’s view of
the task being performed by a computer. These higher-level styles of programming are often called data
abstraction, object-oriented programming, and generic programming.

This paper is organized around the main programming styles directly supported by C++:
§2 The Design and Evolution of C++ describes the aims of C++ and the principles that guided its evolution.
§3 The C Programming Model presents the C subset of C++ and other C++ facilities supporting traditional systems-programming styles.
§4 The C++ Abstraction Mechanisms introduces C++’s class concept and its use for defining new types
that can be used exactly as built-in types, shows how abstract classes can be used to provide interfaces
to objects of a variety of types, describes the use of class hierarchies in object-oriented programming,
and presents templates in support of generic programming.
§5 Large-Scale Programming describes namespaces and exception handling provided to ease the composition of programs out of separate parts.
§6 The C++ Standard Library presents standard facilities such as I/O streams, strings, containers (e.g.
vector, list, and map), generic algorithms (e.g. sort(), find(), for_each()) and support for numeric computation.
To round off, a brief overview of some of the tasks that C++ has been used for and some suggestions for
further reading are given.

An Introduction to C++ Programming

Introduction

C++ is a programming language substantially different from C. Many see C++ as "a better C than C," or as C with some add-ons. I believe that to be wrong, and I intend to teach C++ in a way that makes use of what the language can offer. C++ shares the same low-level constructs as C, however, and I will assume some knowledge of C in this course. You might want to have a look at the C introduction course to get up to speed on that language.

Basic I/O

All intro courses in programming begin with a "Hello World" program [except those that don't -- Ed], and so does this one.
#include <iostream>

  using namespace std;

  int main()
  {
    cout << "Hello EDM/2" << endl;
    return 0;
  }
Line 1 includes the standard header <iostream>, which is needed for the input/output operations; the using directive then brings standard-library names such as cout into scope. (Pre-standard compilers used the old header <iostream.h> instead.) In C++ writing something on standard output is done by:
cout << whatever;
Here "whatever" can be anything that is printable; a string literal as "Hello EDM/2", a variable, an expression. If you want to print several things, you can do so at the same time with:
cout << expr1 << expr2 << expr3 << ...;
Again, expr1, expr2 and expr3 represent things that are printable.
In the "Hello EDM/2" program above, the last expression printed is "endl", a special expression called a manipulator (manipulators, and details about I/O, will be covered in a complete part sometime this fall) which moves the cursor to the next line. It is legal to cascade more expressions after "endl", and doing so means those values will be printed on the next line.

C++ Programming with Qt 3

The Qt toolkit is a C++ class library and a set of tools for building multiplatform
GUI programs using a “write once, compile anywhere” approach. Qt lets
programmers use a single source tree for applications that will run on Windows
95 to XP, Mac OS X, Linux, Solaris, HP-UX, and many other versions of
Unix with X11. A version of Qt is also available for Embedded Linux, with the
same API.

The purpose of this book is to teach you how to write GUI programs using Qt 3.
The book starts with “Hello Qt” and quickly moves on to more advanced topics,
such as creating custom widgets and providing drag and drop. The text is
complemented by a CD that contains the source code of the example programs.
The CD also provides Qt and Borland C++ for Windows, Qt for Unix, and Qt
for Mac OS X. Appendix A explains how to install the software.
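For a first taste before turning to Appendix A, here is a minimal "Hello Qt" program of the kind the book opens with; this is a sketch using the Qt 3 API, so check the documentation shipped with your Qt version:
  #include <qapplication.h>
  #include <qlabel.h>

  int main(int argc, char *argv[])
  {
    QApplication app(argc, argv);  // every Qt GUI program needs one
    QLabel label("Hello Qt!", 0);  // a top-level widget that shows text
    app.setMainWidget(&label);     // quit when this widget is closed
    label.show();
    return app.exec();             // enter the event loop
  }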

The book focuses on explaining good idiomatic Qt 3 programming techniques
rather than simply rehashing or summarizing Qt’s extensive online documentation.
And because we are involved in the development of Qt 4, we have tried
to ensure that most of what we teach here will still be valid and sensible for
Qt 4.

It is assumed that you have a basic knowledge of C++. The code examples use
a subset of C++, avoiding many C++ features that are rarely needed when
programming Qt. In the few places where a more advanced C++ construct is
unavoidable, it is explained as it is used.

Qt made its reputation as a multiplatform toolkit, but because of its intuitive
and powerful API, many organizations use Qt for single-platform development.
Adobe Photoshop Album is just one example of a mass-market Windows
application written in Qt. Many sophisticated software systems in vertical
markets, such as 3D animation tools, digital film processing, electronic design
automation (for chip design), oil and gas exploration, financial services, and
medical imaging, are built with Qt. If you are making a living with a successful
Windows product written in Qt, you can easily create new markets in the
Mac OS X and Linux worlds simply by recompiling.

Qt is available under various licenses. If you want to build commercial
applications, you must buy a commercial license; if you want to build open
source programs, you can use a non-commercial Qt edition. (The editions of Qt
on the CD are non-commercial.) Qt is the foundation on which the K Desktop
Environment (KDE) and the many open source applications that go with it
are built.

In addition to Qt’s hundreds of classes, there are add-ons that extend Qt’s
scope and power. Some of these products, like the Qt/Motif integration module
and Qt Script for Applications (QSA), are supplied by Trolltech, while others
are provided by companies and by the open source community. See http://
www.trolltech.com/products/3rdparty/ for information on Qt add-ons. Qt also
has a well-established and thriving user community that uses the qt-interest
mailing list; see http://lists.trolltech.com/ for details.

The book is divided into two parts. Part I covers all the concepts and practices
necessary for programming GUI applications using Qt. Knowledge of this
part alone is sufficient to write useful GUI applications. Part II covers central
Qt topics in more depth and provides more specialized and advanced material.
The chapters of Part II can be read in any order, but they assume familiarity
with the contents of Part I.

Different kinds of computer languages.

C++ (pronounced "see plus plus") is a general-purpose, high-level programming language with low-level facilities. It is a statically typed free-form multi-paradigm language supporting procedural programming, data abstraction, object-oriented programming, generic programming and RTTI. Since the 1990s, C++ has been one of the most popular commercial programming languages.

Bjarne Stroustrup developed C++ (originally named "C with Classes") in 1983 at Bell Labs as an enhancement to the C programming language. Enhancements started with the addition of classes, followed by, among other features, virtual functions, operator overloading, multiple inheritance, templates, and exception handling. The C++ programming language standard was ratified in 1998 as ISO/IEC 14882:1998, the current version of which is the 2003 version, ISO/IEC 14882:2003. A new version of the standard (known informally as C++0x) is being developed.

Stroustrup began work on C with Classes in 1979. The idea of creating a new language originated from Stroustrup's experience in programming for his Ph.D. thesis. Stroustrup found that Simula had features that were very helpful for large software development, but the language was too slow for practical use, while BCPL was fast but too low-level and unsuitable for large software development. When Stroustrup started working in Bell Labs, he had the problem of analyzing the UNIX kernel with respect to distributed computing. Remembering his Ph.D. experience, Stroustrup set out to enhance the C language with Simula-like features. C was chosen because it is general-purpose, fast, and portable. Besides C and Simula, some other languages which inspired him were ALGOL 68, Ada, CLU and ML. At first, the class, derived class, strong type checking, inlining, and default argument features were added to C via Cfront. The first commercial release occurred in October 1985.[1]

In 1983, the name of the language was changed from C with Classes to C++. New features were added including virtual functions, function name and operator overloading, references, constants, user-controlled free-store memory control, improved type checking, and a new single-line comment style with two forward slashes (//). In 1985, the first edition of The C++ Programming Language was released, providing an important reference to the language, as there was not yet an official standard. In 1989, Release 2.0 of C++ was released. New features included multiple inheritance, abstract classes, static member functions, const member functions, and protected members. In 1990, The Annotated C++ Reference Manual was published. This work became the basis for the future standard. Late addition of features included templates, exceptions, namespaces, new casts, and a Boolean type.
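To make a couple of those features concrete, here is a small illustrative fragment (the class names are invented for the example) showing a class hierarchy, a virtual function, and the // comment style introduced in this period:
  #include <iostream>

  // A base class with a virtual function.
  class Shape {
  public:
    virtual double area() const { return 0.0; }
    virtual ~Shape() {}
  };

  // A derived class that overrides the virtual function.
  class Square : public Shape {
  public:
    Square(double side) : side_(side) {}
    double area() const { return side_ * side_; }
  private:
    double side_;
  };

  int main()
  {
    Square sq(3.0);
    Shape& s = sq;                       // base reference to derived object
    std::cout << s.area() << std::endl;  // virtual dispatch prints 9
    return 0;
  }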

As the C++ language evolved, a standard library evolved with it. The first addition to the C++ standard library was the stream I/O library, which provided facilities to replace the traditional C functions such as printf and scanf. Later, one of the most significant additions to the standard library was the Standard Template Library (STL).
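As a brief illustration of both additions, the fragment below (the values are chosen arbitrarily) uses stream I/O in place of printf, and an STL container together with a generic algorithm:
  #include <algorithm>
  #include <cstddef>
  #include <iostream>
  #include <vector>

  int main()
  {
    std::vector<int> v;             // an STL container
    v.push_back(4);
    v.push_back(1);
    v.push_back(3);
    v.push_back(2);
    std::sort(v.begin(), v.end());  // a generic STL algorithm
    for (std::size_t i = 0; i < v.size(); ++i)
      std::cout << v[i] << ' ';     // stream I/O instead of printf
    std::cout << std::endl;         // prints: 1 2 3 4
    return 0;
  }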

After years of work, a joint ANSI-ISO committee standardized C++ in 1998 (ISO/IEC 14882:1998). For some years after the official release of the standard in 1998, the committee processed defect reports, and published a corrected version of the C++ standard in 2003. In 2005, a technical report, called the "Library Technical Report 1" (often known as TR1 for short) was released. While not an official part of the standard, it gives a number of extensions to the standard library which are expected to be included in the next version of C++. Support for TR1 is growing in almost all currently maintained C++ compilers.

Tuesday, November 30, 2010

Fifth Generation - Present and Beyond: Artificial Intelligence
Fifth generation computing devices, based on artificial intelligence, are still in development, though there are some applications, such as voice recognition, that are being used today.
Artificial intelligence is the branch of computer science concerned with making computers behave like humans. The term was coined in 1956 by John McCarthy at the Dartmouth Conference. Artificial intelligence includes:
·         Games Playing: programming computers to play games such as chess and checkers
·         Expert Systems: programming computers to make decisions in real-life situations (for example, some expert systems help doctors diagnose diseases based on symptoms)
·         Natural Language: programming computers to understand natural human languages
·         Neural Networks: Systems that simulate intelligence by attempting to reproduce the types of physical connections that occur in animal brains
·         Robotics: programming computers to see and hear and react to other sensory stimuli
Currently, no computers exhibit full artificial intelligence (that is, are able to simulate human behavior). The greatest advances have occurred in the field of games playing. The best computer chess programs are now capable of beating humans. In May 1997, an IBM supercomputer called Deep Blue defeated world chess champion Garry Kasparov in a chess match.
In the area of robotics, computers are now widely used in assembly plants, but they are capable only of very limited tasks. Robots have great difficulty identifying objects based on appearance or feel, and they still move and handle objects clumsily.
Natural-language processing offers the greatest potential rewards because it would allow people to interact with computers without needing any specialized knowledge. You could simply walk up to a computer and talk to it. Unfortunately, programming computers to understand natural languages has proved to be more difficult than originally thought. Some rudimentary translation systems that translate from one human language to another are in existence, but they are not nearly as good as human translators.
There are also voice recognition systems that can convert spoken sounds into written words, but they do not understand what they are writing; they simply take dictation. Even these systems are quite limited -- you must speak slowly and distinctly.
In the early 1980s, expert systems were believed to represent the future of artificial intelligence and of computers in general. To date, however, they have not lived up to expectations. Many expert systems help human experts in such fields as medicine and engineering, but they are very expensive to produce and are helpful only in special situations.
Today, the hottest area of artificial intelligence is neural networks, which are proving successful in a number of disciplines such as voice recognition and natural-language processing.
There are several programming languages that are known as AI languages because they are used almost exclusively for AI applications. The two most common are LISP and Prolog.

Fourth Generation - 1971-Present: Microprocessors
The microprocessor brought about the fourth generation of computers, as thousands of integrated circuits were built onto a single silicon chip. A microprocessor is a silicon chip that contains a CPU. In the world of personal computers, the terms microprocessor and CPU are used interchangeably. At the heart of all personal computers and most workstations sits a microprocessor. Microprocessors also control the logic of almost all digital devices, from clock radios to fuel-injection systems for automobiles.
Three basic characteristics differentiate microprocessors:
§  Instruction Set: The set of instructions that the microprocessor can execute.
§  Bandwidth: The number of bits processed in a single instruction.
§  Clock Speed: Given in megahertz (MHz), the clock speed determines how many instructions per second the processor can execute.
For bandwidth and clock speed, the higher the value, the more powerful the CPU. For example, a 32-bit microprocessor that runs at 50 MHz is more powerful than a 16-bit microprocessor that runs at 25 MHz.
What in the first generation filled an entire room could now fit in the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer, from the central processing unit and memory to input/output controls, on a single chip.
CPU is an abbreviation of central processing unit, pronounced as separate letters. The CPU is the brains of the computer. Sometimes referred to simply as the processor or central processor, the CPU is where most calculations take place. In terms of computing power, the CPU is the most important element of a computer system.
On large machines, CPUs require one or more printed circuit boards. On personal computers and small workstations, the CPU is housed in a single chip called a microprocessor.
Two typical components of a CPU, illustrated by the toy sketch after this list, are:
·         The arithmetic logic unit (ALU), which performs arithmetic and logical operations.
·         The control unit, which extracts instructions from memory and decodes and executes them, calling on the ALU when necessary.
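The division of labor between the control unit and the ALU can be sketched as a toy fetch/decode/execute loop. This is a deliberately simplified, hypothetical machine; the opcodes and the single-accumulator model are invented for the example:
  #include <iostream>

  // A toy machine: each instruction is an opcode plus one operand.
  enum Opcode { LOAD, ADD, MUL, HALT };
  struct Instr { Opcode op; int operand; };

  int main()
  {
    // A tiny program: load 2, add 3, multiply by 4.
    Instr program[] = { {LOAD, 2}, {ADD, 3}, {MUL, 4}, {HALT, 0} };

    int acc = 0;  // accumulator register
    int pc = 0;   // program counter
    for (;;) {
      Instr i = program[pc++];  // fetch: the control unit reads an instruction
      switch (i.op) {           // decode: the control unit inspects the opcode
        case LOAD: acc = i.operand; break;   // execute: work done by the ALU
        case ADD:  acc += i.operand; break;
        case MUL:  acc *= i.operand; break;
        case HALT: std::cout << acc << std::endl; return 0;  // prints 20
      }
    }
  }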
In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors.
As these small computers became more powerful, they could be linked together to form networks, which eventually led to the development of the Internet. Fourth-generation computers also saw the development of GUIs, the mouse and handheld devices.

Third Generation - 1964-1971: Integrated Circuits
Third-generation computers were a clear advance over those of the second generation.
The development of the integrated circuit was the hallmark of the third generation of computers. Transistors were miniaturized and placed on silicon chips, called semiconductors, which drastically increased the speed and efficiency of computers.
A nonmetallic chemical element in the carbon family of elements. Silicon - atomic symbol "Si" - is the second most abundant element in the earth's crust, surpassed only by oxygen. Silicon does not occur uncombined in nature. Sand and almost all rocks contain silicon combined with oxygen, forming silica. When silicon combines with other elements, such as iron, aluminum or potassium, a silicate is formed. Compounds of silicon also occur in the atmosphere, natural waters, and many plants and in the bodies of some animals.
Silicon is the basic material used to make computer chips, transistors, silicon diodes and other electronic circuits and switching devices because its atomic structure makes the element an ideal semiconductor. Silicon is commonly doped, or mixed, with other elements, such as boron, phosphorous and arsenic, to alter its conductive properties.
A chip is a small piece of semiconducting material (usually silicon) on which an integrated circuit is embedded. A typical chip is less than ¼ square inch and can contain millions of electronic components (transistors). Computers consist of many chips placed on electronic boards called printed circuit boards. There are different types of chips. For example, CPU chips (also called microprocessors) contain an entire processing unit, whereas memory chips contain blank memory.
A semiconductor is a material that is neither a good conductor of electricity (like copper) nor a good insulator (like rubber). The most common semiconductor materials are silicon and germanium. These materials are then doped to create an excess or lack of electrons.
Computer chips, both for CPU and memory, are composed of semiconductor materials. Semiconductors make it possible to miniaturize electronic components, such as transistors. Not only does miniaturization mean that the components take up less space, it also means that they are faster and require less energy.

Monday, November 29, 2010

Generation of Computer

Second Generation - 1956-1963: Transistors
Transistors replaced vacuum tubes and ushered in the second generation computer. Transistor is a device composed of semiconductor material that amplifies a signal or opens or closes a circuit. Invented in 1947 at Bell Labs, transistors have become the key ingredient of all digital circuits, including computers. Today's latest microprocessor contains tens of millions of microscopic transistors.
Prior to the invention of transistors, digital circuits were composed of vacuum tubes, which had many disadvantages. They were much larger, required more energy, dissipated more heat, and were more prone to failures. It's safe to say that without the invention of transistors, computing as we know it today would not be possible.
The transistor was invented in 1947 but did not see widespread use in computers until the late 50s. The transistor was far superior to the vacuum tube, allowing computers to become smaller, faster, cheaper, more energy-efficient and more reliable than their first-generation predecessors. Though the transistor still generated a great deal of heat that subjected the computer to damage, it was a vast improvement over the vacuum tube. Second-generation computers still relied on punched cards for input and printouts for output.
Second-generation computers moved from cryptic binary machine language to symbolic, or assembly, languages, which allowed programmers to specify instructions in words. High-level programming languages were also being developed at this time, such as early versions of COBOL and FORTRAN. These were also the first computers that stored their instructions in their memory, which moved from a magnetic drum to magnetic core technology.
The first computers of this generation were developed for the atomic energy industry.

Generation of computer

First Generation - 1940-1956: Vacuum Tubes
The first computers used vacuum tubes for circuitry and magnetic drums for memory, and were often enormous, taking up entire rooms. A magnetic drum, also referred to as a drum, is a metal cylinder coated with magnetic iron-oxide material on which data and programs can be stored. Magnetic drums were once used as a primary storage device but have since been implemented as auxiliary storage devices.
The tracks on a magnetic drum are assigned to channels located around the circumference of the drum, forming adjacent circular bands that wind around the drum. A single drum can have up to 200 tracks. As the drum rotates at a speed of up to 3,000 rpm, the device's read/write heads deposit magnetized spots on the drum during the write operation and sense these spots during a read operation. This action is similar to that of a magnetic tape or disk drive.
They were very expensive to operate and, in addition to using a great deal of electricity, generated a lot of heat, which was often the cause of malfunctions. First-generation computers relied on machine language to perform operations, and they could only solve one problem at a time. Machine languages are the only languages understood by computers. While easily understood by computers, machine languages are almost impossible for humans to use because they consist entirely of numbers. Programmers, therefore, use either high-level programming languages or assembly language. An assembly language contains the same instructions as a machine language, but the instructions and variables have names instead of being just numbers.
Programs written in high-level programming languages are translated into assembly language or machine language by a compiler. An assembly language program is translated into machine language by a program called an assembler (assembly language compiler).
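As a small concrete illustration of that translation chain, consider the one-line C++ function below (the file name is invented for the example). A command such as g++ -S add.cpp asks the compiler to stop after generating human-readable assembly in add.s, which an assembler then turns into numeric machine code:
  // add.cpp -- a tiny function whose generated assembly is easy to inspect.
  int add(int a, int b)
  {
    return a + b;  // compiles to a handful of machine instructions
  }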
Every CPU has its own unique machine language. Programs must be rewritten or recompiled, therefore, to run on different types of computers. Input was based on punched cards and paper tape, and output was displayed on printouts.
The UNIVAC and ENIAC computers are examples of first-generation computing devices. The UNIVAC was the first commercial computer delivered to a business client, the U.S. Census Bureau in 1951.
Acronym for Electronic Numerical Integrator And Computer, the world's first operational electronic digital computer, developed by Army Ordnance to compute World War II ballistic firing tables. The ENIAC, weighing 30 tons, using 200 kilowatts of electric power and consisting of 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors, was completed in 1945. In addition to ballistics, the ENIAC's field of application included weather prediction, atomic-energy calculations, cosmic-ray studies, thermal ignition, random-number studies, wind-tunnel design, and other scientific uses. The ENIAC soon became obsolete as the need arose for faster computing speeds.

History Of Computer

The history of computer science began long before the modern discipline of computer science emerged in the twentieth century, and was hinted at in the centuries prior. The progression from mechanical inventions and mathematical theories towards modern concepts and machines formed a major academic field and the basis of a massive worldwide industry.
Generation of Computer.
The history of computer development is often described in terms of the different generations of computing devices. A generation refers to the state of improvement in the product development process; the term is also used for each new advance in computer technology. With each new generation, the circuitry has become smaller and more advanced than in the generation before it. As a result of this miniaturization, the speed, power, and memory of computers have increased proportionally. New discoveries are constantly being made that affect the way we live, work and play.
Each generation of computers is characterized by major technological development that fundamentally changed the way computers operate, resulting in increasingly smaller, cheaper, more powerful and more efficient and reliable devices. Read about each generation and the developments that led to the current devices that we use today.