
1. What is meant by the term Moore's Law?

Moore's law is the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The period often quoted as "18 months" is due to Intel executive David House, who predicted that period for a doubling in chip performance (a combination of having more transistors and their being faster). The law is named after Intel co-founder Gordon E. Moore, who described the trend in his 1965 paper. The paper noted that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965 and predicted that the trend would continue "for at least ten years". His prediction has proven to be uncannily accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development.

source: http://en.wikipedia.org/wiki/Moore's_law
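As a rough illustration of the doubling rule, the sketch below projects a transistor count forward under a two-year doubling period. The starting count and years are invented for the example; they are not figures from the cited article.

```python
# A rough projection of transistor counts under the two-year doubling
# rule quoted above. The starting count and years are invented for
# illustration; they are not figures from the cited article.

def projected_transistors(start_count, start_year, target_year, doubling_period=2):
    """Project the transistor count at target_year, assuming the count
    doubles every doubling_period years."""
    doublings = (target_year - start_year) / doubling_period
    return start_count * 2 ** doublings

# Example: a chip with 1 million transistors in 1990, projected 20 years ahead.
print(f"{projected_transistors(1_000_000, 1990, 2010):,.0f}")  # ~1,024,000,000
```

Twenty years at a two-year doubling period is ten doublings, i.e. a factor of 2^10 = 1024.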

2. In a computer system, what are the main functions of the central processing unit and random access memory?
CPU - The CPU is the hardware within a computer system or smartphone that carries out the instructions of a computer program by performing the basic arithmetical, logical, and input/output operations of the system. The fundamental operation of most CPUs, regardless of the physical form they take, is to execute a sequence of stored instructions called a program. The program is represented by a series of numbers that are kept in some kind of computer memory. There are four steps that nearly all CPUs use in their operation: fetch, decode, execute, and writeback.

The CPU understands machine instructions, which programmers commonly write in assembly language, and it can move data from one memory location to another.
Sources: http://en.wikipedia.org/wiki/Central_processing_unit http://silwen.hubpages.com/hub/What-are-the-basic-functions-of-a-CPU
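The fetch-decode-execute-writeback cycle can be sketched with a toy interpreter. The instruction set and encoding below are invented purely to illustrate the cycle; real CPUs execute binary machine code.

```python
# A toy interpreter illustrating the fetch, decode, execute, and
# writeback steps described above. The instruction set is invented
# for this example.

memory = [
    ("LOAD", 0, 5),     # put the value 5 into register 0
    ("LOAD", 1, 7),     # put the value 7 into register 1
    ("ADD",  2, 0, 1),  # register 2 = register 0 + register 1
    ("HALT",),
]
registers = [0, 0, 0, 0]
pc = 0  # program counter

while True:
    instruction = memory[pc]          # fetch the next stored instruction
    opcode, *operands = instruction   # decode it into opcode and operands
    pc += 1
    if opcode == "HALT":
        break
    if opcode == "LOAD":              # execute, then write the result back
        dest, value = operands
        registers[dest] = value
    elif opcode == "ADD":
        dest, a, b = operands
        registers[dest] = registers[a] + registers[b]

print(registers)  # [5, 7, 12, 0]
```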

RAM - RAM is electronic, volatile storage. When the computer is off, RAM is empty; when it is on, RAM holds a copy of the software instructions and data needed for processing. RAM is used for the following purposes:
- Storage of a copy of the main systems program that controls the general operation of the computer. This copy is loaded into RAM when the computer is turned on and stays there as long as the computer is on.
- Temporary storage of a copy of application program instructions, to be retrieved by the central processing unit (CPU) for interpretation and execution.
- Temporary storage of data that has been input from the keyboard or another input device, until instructions call for the data to be transferred into the CPU for processing.
- Temporary storage of data produced as a result of processing, until instructions call for the data to be used again in subsequent processing or transferred to an output device such as the screen, a printer, or a disk storage device.

Source: http://www.blurtit.com/q248576.html
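The volatile/persistent distinction can be sketched in a few lines. The file name and values below are arbitrary examples.

```python
# A small sketch of the volatile/persistent distinction described above.
# The file name and values are arbitrary examples.

data_in_ram = [10, 20, 30]  # held in RAM; lost when the process (or power) ends

# Writing to disk makes the data survive a restart.
with open("results.txt", "w") as f:
    f.write(",".join(str(n) for n in data_in_ram))

# After a reboot RAM starts empty, so the data has to be loaded back
# from disk before the CPU can work on it again.
with open("results.txt") as f:
    reloaded = [int(n) for n in f.read().split(",")]

print(reloaded)  # [10, 20, 30]
```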

3. What is an information specialist? Give at least four (4) examples.


An information technology specialist applies technical expertise to the implementation, monitoring, or maintenance of IT systems. Specialists typically focus on a specific computer network, database, or systems administration function. Specialty areas include network analysis, system administration, security and information assurance, IT audit, database administration, web administration, and more.

Source: http://www.webopedia.com/TERM/I/information_technology_specialist.html

4. Is a computer a conceptual system or a physical system or both? Why?


A computer is both a conceptual system and a physical system. It is a physical system because it uses wires and electrical signals to operate, exploiting the characteristics of semiconductor devices. It is a conceptual system because the complexity of physical reality is abstracted away at a logical level in order for it to do useful things (such as run software).

Source: http://answers.yahoo.com/question/index?qid=20090225030337AADsoV8

5. Distinguish between data and information.


Data is the raw material for data processing; it relates to facts, events, and transactions. Data refers to unprocessed, raw input which, when processed or arranged, produces meaningful output. It consists of the items that represent quantitative and qualitative attributes of variables. Examples of data are facts, figures, or statistics; in computer terms, symbols, characters, images, or numbers are data. These are the inputs the system processes to give a meaningful interpretation.

Information is the processed outcome of data; it is derived from data. Information is data that has been processed in such a way as to be meaningful to the person who receives it, and it is anything that is communicated. Information can be a mental stimulus, a perception, a representation, knowledge, or even an instruction. In other words, data in a meaningful form is information.

For example, researchers conducting a market research survey might ask members of the public to complete questionnaires about a product or a service. The completed questionnaires are data; they are processed and analyzed in order to prepare a report on the survey. The resulting report is information.

Sources: http://wiki.answers.com/Q/What_is_the_difference_between_data_and_information_in_computer_terms http://www.differencebetween.net/language/difference-between-data-and-information/
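The survey example can be sketched in a few lines of Python. The response scores below are made up purely to show raw data being processed into a meaningful summary (information).

```python
# A minimal sketch of data being processed into information, loosely
# following the survey example above. The scores below are made up.

# Data: raw satisfaction scores (1-5) from completed questionnaires.
responses = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]

# Processing: aggregate the raw scores.
average = sum(responses) / len(responses)
satisfied = sum(1 for r in responses if r >= 4)

# Information: a summary that is meaningful to the person who receives it.
print(f"{len(responses)} respondents, average score {average:.1f}, "
      f"{satisfied} rated the product 4 or higher.")
```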

6. What are the five (5) main systems that made up the evolution of computer applications? Briefly discuss each.

Mainframe Computers
In the early 1960s, mainframe computers began to find their way into large enterprises. These mainframes consisted of extremely large computers that were responsible for all logic, storage, and processing of data. "Dumb terminals" allowed various users to interact with the mainframe. These systems continued in widespread use for more than 30 years, and to some degree, continue to exist today. Architecturally, these were designed at a time when processing power was scarce and expensive; therefore, it was cost effective to centralize all the power onto the server. The clients for the mainframe systems contained virtually no logic because they relied on the server for everything, including the display logic.

The Age of Microcomputers


As memory and processing power became cheaper, the microcomputer (also known as the personal computer) began to find its way into businesses. Originally, these were used to run stand-alone applications, where everything needed by the application resided directly on the terminal at which the end user worked. These terminals were often easier to use because the user interface had improved. It was during this time that graphical user interfaces (GUIs) became available, further increasing the ease of use of the systems. However, as stand-alone systems, there was still no effective way to centralize data or business rules.

Client/Server Computing
With the migration from mainframe to microcomputer, the pendulum swung from one extreme (having all logic on the server) to the other extreme (having all logic on the client). Sensing the imbalance in this, several vendors began to develop a system that could encapsulate all the benefits of the microcomputer as well as those of the mainframe systems. This led to the birth of client/server applications.

Client/server applications were frequently written in languages such as Visual Basic or PowerBuilder, and they offered a lot of flexibility to application developers. Interfaces that were very interactive and intuitive could be created and maintained independent of the logic that drove the application functionality. This separation allowed modifications to be made to the user interface (the place in an application where changes are most frequent) without the need to impact business rules or data access. Additionally, by connecting the client to a remote server, it became possible to build systems in which multiple users could share data and application functionality. With business and data access logic centrally located, any changes to these could be made in a single place. A minimal sketch of this split follows below.

Although traditional client/server applications offered tremendous advantages over stand-alone and mainframe applications, they all lacked a distributed client. This meant that for each change that needed to be made to the user interface, the files comprising the client needed to be reinstalled at each workstation, often requiring dynamic link library (DLL) files to be updated. The phrase "DLL hell" aptly captured the frustration of many IT professionals whose job it was to keep the client applications current within a business.
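A minimal sketch of the client/server split, using Python's built-in XML-RPC modules: the business rule lives in one central place (the server), and the client only displays the result. The port number, function name, and discount rule are invented for illustration, not taken from the source.

```python
# A minimal sketch of the client/server split described above, using
# Python's built-in XML-RPC modules: the business rule lives in one
# central place (the server), and the client only displays the result.
# The port number, function name, and discount rule are invented here.

import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def order_total(quantity, unit_price):
    """Business rule kept on the server: orders of 10 or more get 10% off."""
    total = quantity * unit_price
    return total * 0.9 if quantity >= 10 else total

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(order_total)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client" below only formats and shows what the server returns;
# changing the discount rule means changing the server in one place.
client = ServerProxy("http://localhost:8000")
print(f"Total for 12 units at $5.00: ${client.order_total(12, 5.00):.2f}")
server.shutdown()
```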

The Internet
During the days of client/server dominance, the U.S. government project ARPANet was renamed the "Internet" and started becoming available to businesses as a means to share files across a distributed network. Most of the early protocols of the Internet, such as File Transfer Protocol (FTP) and Gopher, were specifically related to file sharing. The Hypertext Transfer Protocol (HTTP) followed these and introduced the concept of "hyperlinking" between networked documents.

The Internet, in many ways, is like the mainframe systems that predate it, in that an ultra-thin client (the browser) is used to display information retrieved by the server. The documents on the server contain all the information to determine how the page will be displayed in the client. Businesses began to embrace the Internet as a means to share documents, and in time, many realized that the distributed nature of the Internet could free them from the DLL hell of their client/server applications. This newfound freedom led to the introduction of the Internet as more than a document-sharing system and introduced the concept of the web-based application. Of course, these web-based applications lacked the richness and usability that was taken for granted in the client/server days.
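The page-based request/response model can be sketched with Python's standard urllib acting as a stand-in for the thin browser client; example.com is used purely as a placeholder URL.

```python
# A minimal sketch of the page-based request/response model described
# above: a thin client sends an HTTP request and the server returns a
# complete document. example.com is used purely as a placeholder URL.

from urllib.request import urlopen

with urlopen("http://example.com/") as response:
    print(response.status)                    # e.g. 200 for a successful request
    html = response.read().decode("utf-8")

# The returned HTML contains everything needed to render the page,
# including hyperlinks to other networked documents.
print(html[:80])
```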

Establishing the Need for Rich Internet Applications (RIAs)


Through the transition from client/server applications to web-based applications, businesses were able to save a tremendous amount of money on the costs of desktop support for applications. No longer was it necessary to move from one desk to the next to reinstall the latest version of the application client with each change. Instead, each time the application was used, the latest client logic was downloaded from the server.

Of course, within a few years, many businesses realized that there was a downside to this model. Although they were indeed saving money on distribution costs, they also lost money, largely due to the productivity losses of their employees. The richness of the client in client/server applications allowed end users to achieve their goals quickly and efficiently. However, the page-based nature of web-based applications meant that for each action users took, the data needed to be sent back to the server and a new page needed to be retrieved. Although this often was a matter of only seconds per page, over the course of an eight-hour work day those seconds quickly added up to several minutes per day. Many businesses found that, over the course of a work week, employees heavily involved in data entry operations were losing as many as 3-5 hours a week in productive time, as compared to doing the same tasks in their earlier client/server applications.

Looking to regain the lost productivity, several variations of rich clients for Internet applications were attempted. One of the early attempts was Java applets, but these often failed because the file size was too large and there were many issues with platform independence. Fortunately, with the release of Macromedia Flash MX in 2002, a new tool to solve the problem was introduced. With Flash as a client, it was again possible to have all the richness and benefits of a traditional client/server application along with the distributed nature of a web-based system. The end result was that the productivity of the client/server days was restored, without the added expense of keeping the user base up to date.

However, to begin using Flash in this way, Flash developers had to make a logical leap. Traditionally, Flash was used to build stand-alone applications, often in the form of animations or movies, which would most often use local data to run. To fully leverage the benefits of the client/server model, developers needed to understand the benefits of connecting to a server and the proper delegation between local and remote processing of data.
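A back-of-the-envelope version of the productivity arithmetic above; the seconds-per-reload and pages-per-day figures are assumptions chosen only to show how small per-page delays accumulate, not numbers from the source.

```python
# A back-of-the-envelope version of the productivity arithmetic above.
# The seconds-per-reload and pages-per-day figures are assumptions
# chosen only to show how small per-page delays accumulate.

seconds_per_reload = 5     # assumed wait for each full page round trip
pages_per_day = 400        # assumed page loads for a heavy data-entry user
work_days_per_week = 5

lost_hours = seconds_per_reload * pages_per_day * work_days_per_week / 3600
print(f"~{lost_hours:.1f} hours of waiting per week")  # ~2.8 hours
```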

Source: http://oopas2.uw.hu/ch10lev1sec1.html
