
Saturday, July 10, 2010

HART Communication

For many years, the field communication standard for process automation equipment has been a milliamp (mA) analog current signal. The milliamp current signal varies within a range of 4-20mA in proportion to the process variable being represented. In typical applications a signal of 4mA corresponds to the lower limit (0%) of the calibrated range and 20mA corresponds to the upper limit (100%). Virtually all installed systems use this international standard for communicating process variable information between process automation equipment.
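To make this scaling concrete, here is a minimal sketch that converts between loop current and percent of calibrated range; the function names and the 0-200 °C example range are illustrative choices, not part of the standard.

    def current_to_percent(i_ma: float, lo: float = 4.0, hi: float = 20.0) -> float:
        """Convert a 4-20 mA loop current to percent of calibrated range."""
        return (i_ma - lo) / (hi - lo) * 100.0

    def percent_to_current(pct: float, lo: float = 4.0, hi: float = 20.0) -> float:
        """Convert percent of calibrated range back to loop current in mA."""
        return lo + pct / 100.0 * (hi - lo)

    # A transmitter calibrated 0-200 degC that outputs 12 mA is at 50% of
    # its range, i.e. reading 100 degC.
    print(current_to_percent(12.0))   # 50.0
    print(percent_to_current(50.0))   # 12.0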

HART Field Communications Protocol extends this 4-20mA standard to enhance communication with smart field instruments. The HART protocol was designed specifically for use with intelligent measurement and control instruments which traditionally communicate using 4-20mA analog signals. HART preserves the 4-20mA signal and enables two-way digital communication to occur without disturbing the integrity of the 4-20mA signal. Unlike other digital communication technologies, the HART protocol maintains compatibility with existing 4-20mA systems, and in doing so provides users with a uniquely backward-compatible solution. The HART Communication Protocol is well established as the industry standard for digitally enhanced 4-20mA field communication.

THE HART PROTOCOL - AN OVERVIEW

HART is an acronym for "Highway Addressable Remote Transducer". The HART protocol makes use of the Bell 202 Frequency Shift Keying (FSK) standard to superimpose digital communication signals at a low level on top of the 4-20mA signal. This enables two-way field communication to take place and makes it possible for additional information beyond just the normal process variable to be communicated to/from a smart field instrument. The HART protocol communicates at 1200 bps without interrupting the 4-20mA signal and allows a host application (master) to get two or more digital updates per second from a field device. As the digital FSK signal is phase continuous, there is no interference with the 4-20mA signal.
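To make the FSK mechanism concrete, the sketch below synthesizes a phase-continuous Bell 202-style waveform (1200 Hz for a logical 1, 2200 Hz for a logical 0) riding on a steady loop current; the ±0.5 mA amplitude and the sample rate are illustrative values, not taken from the HART specification.

    import numpy as np

    FS = 48_000                        # sample rate in Hz (illustrative)
    BAUD = 1200                        # HART/Bell 202 bit rate
    F_MARK, F_SPACE = 1200.0, 2200.0   # Bell 202 tones for 1 / 0

    def fsk_waveform(bits, amplitude_ma=0.5):
        """Phase-continuous FSK: integrating the instantaneous frequency
        avoids phase jumps at bit boundaries, so the digital signal averages
        out and leaves the 4-20 mA level undisturbed."""
        samples_per_bit = FS // BAUD
        freqs = np.repeat([F_MARK if b else F_SPACE for b in bits], samples_per_bit)
        phase = 2 * np.pi * np.cumsum(freqs) / FS
        return amplitude_ma * np.sin(phase)

    # Superimpose a short bit pattern on a steady 12 mA analog signal.
    loop_current = 12.0 + fsk_waveform([1, 0, 1, 1, 0, 0, 1, 0])
    print(round(loop_current.mean(), 3))   # ~12.0: analog value preserved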

HART is a master/slave protocol, which means that a field (slave) device only speaks when spoken to by a master. The HART protocol can be used in various modes for communicating information to/from smart field instruments and central control or monitoring systems. HART provides for up to two masters (primary and secondary). This allows secondary masters such as handheld communicators to be used without interfering with communications to/from the primary master, i.e. the control/monitoring system. The most commonly employed HART communication mode is master/slave communication of digital information simultaneous with transmission of the 4-20mA signal. The HART protocol permits all-digital communication with field devices in either point-to-point or multidrop network configurations. There is also an optional "burst" communication mode in which a single slave device can continuously broadcast a standard HART reply message.
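The request-response pattern can be sketched as follows. The frame layout below (preamble, delimiter, address, command, byte count, data, XOR checksum) is a simplification for illustration, not the exact HART frame format, which distinguishes short and long addressing and several delimiter types.

    def build_frame(address: int, command: int, data: bytes = b"") -> bytes:
        """Build a simplified HART-style request frame: preamble bytes, a
        start delimiter, address, command, byte count, data, and an XOR
        checksum over everything after the preamble (illustrative layout)."""
        preamble = b"\xff" * 5
        body = bytes([0x02, address, command, len(data)]) + data
        checksum = 0
        for byte in body:
            checksum ^= byte
        return preamble + body + bytes([checksum])

    # The master polls the slave at address 1 with universal command 1
    # ("read primary variable"); a slave speaks only when addressed.
    request = build_frame(address=1, command=1)
    print(request.hex())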

HART COMMUNICATION LAYERS

The HART protocol follows the OSI reference model. As is the case for most communication systems at the field level, the HART protocol implements only layers 1, 2 and 7 of the OSI model. Layers 3 to 6 remain empty, since their services are either not required or are provided by the application layer (layer 7).

IBOC Technology

The engineering world has been working on the development and evaluation of IBOC transmission for some time. The NRSC began evaluation proceedings of general DAB systems in 1995. After the proponents merged into one, Ibiquity was left in the running for potential adoption. In the fall of 2001, the NRSC issued a report on Ibiquity's FM IBOC. This comprehensive report runs 62 pages of engineering material plus 13 appendices, covering all aspects of the system, including its blend-to-analog operation as signal levels change. The application of FM IBOC has been studied by the NRSC and appears to be understood and accepted by radio engineers.

AM IBOC has recently been studied by an NRSC working group as a prelude to its adoption for general broadcast use. Its report was presented during the NAB convention in April. The FM report covers eight areas of vital performance concern to broadcaster and listener alike. Whether all of these concerns can be met as successfully by AM IBOC, and whether receiver manufacturers will rally to develop and produce the necessary receiving equipment, remains to be seen. The evaluated FM concerns were audio quality, service area, acquisition performance, durability, auxiliary data capacity, behavior as the signal degrades, stereo separation, and flexibility.

The FM report paid strong attention to the use of SCA services on FM IBOC. About half of all operating FM stations employ one or more SCAs for reading services for the blind or similar services. Before describing the FM IBOC system, it is important to discuss the basic principles of digital radio and of IBOC technology. The following sections cover these topics.

2. BASIC PRINCIPLES OF DIGITAL RADIO

WHAT IS DIGITAL RADIO?

Digital radio is a new method of assembling, broadcasting and receiving communications services using the same digital technology now common in many products and services such as computers, compact discs (CDs) and telecommunications.
Digital radio can:
" Provide for better reception of radio services than current amplitude modulation (AM) and frequency modulation (FM) radio broadcasts;
" Deliver higher quality sound than current AM and FM radio broadcasts to fixed, portable and mobile receivers; and
" Carry ancillary services-in the form of audio, images, data and text-providing
" Program information associated with the station and its audio programs (such as station name, song title, artist's name and record label),
" Other information (e.g. Internet downloads, traffic information, news and weather), and
" Other services (e.g. paging and global satellite positioning).

A fundamental difference between analog and digital broadcasting is that digital technology involves the delivery of digital bit streams that can be used not only for sound broadcasting but for all manner of multimedia services.

Adaptive Optics in Ground Based Telescopes

Adaptive optics is a new technology now being used in ground-based telescopes to remove atmospheric tremor and thus provide a clearer and brighter view of the stars. Without this system, images obtained through telescopes on Earth appear blurred because of the turbulent mixing of air at different temperatures.

Adaptive optics in effect removes this atmospheric tremor. It brings together the latest in computers, material science, electronic detectors, and digital control in a system that warps and bends a mirror in a telescope to counteract, in real time, the atmospheric distortion.
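The real-time correction can be caricatured as a simple closed-loop integrator: measure the residual wavefront error, feed a fraction of it back to the deformable mirror, and repeat at kilohertz rates. The sketch below is a toy one-actuator model; the gain, noise level, and random-walk turbulence are all illustrative assumptions.

    import random

    def ao_loop(steps=2000, gain=0.3, sensor_noise=0.05):
        """Toy integrator control loop for one actuator of a deformable
        mirror: 'turbulence' is a slowly wandering phase error, 'mirror'
        is the accumulated correction applied against it."""
        turbulence, mirror = 0.0, 0.0
        residuals = []
        for _ in range(steps):
            turbulence += random.gauss(0.0, 0.02)              # atmosphere drifts
            measured = (turbulence - mirror) + random.gauss(0.0, sensor_noise)
            mirror += gain * measured                          # integrator update
            residuals.append(abs(turbulence - mirror))
        return sum(residuals) / len(residuals)

    print("mean residual, loop closed:", ao_loop())
    print("mean residual, loop open:  ", ao_loop(gain=0.0))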

The advance promises to let ground-based telescopes reach their fundamental limits of resolution and sensitivity, outperforming space-based telescopes and ushering in a new era in optical astronomy. Finally, with this technology, it will be possible to see gas-giant planets in nearby solar systems in our Milky Way galaxy. Although about 100 such planets have been discovered in recent years, all were detected through indirect means, such as their gravitational effects on their parent stars, and none has actually been detected directly.

WHAT IS ADAPTIVE OPTICS?

Adaptive optics refers to optical systems which adapt to compensate for optical effects introduced by the medium between the object and its image. In theory a telescope's resolving power is directly proportional to the diameter of its primary light-gathering lens or mirror. But in practice, images from large telescopes are blurred to a resolution no better than would be seen through a 20 cm aperture with no atmospheric blurring. At scientifically important infrared wavelengths, atmospheric turbulence degrades resolution by at least a factor of 10.
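The factor-of-ten claim can be checked against the Rayleigh criterion, θ ≈ 1.22 λ/D. The sketch below compares the diffraction limit of a 6.5 m aperture with the roughly 20 cm effective aperture imposed by seeing; the 2.2 µm wavelength is an illustrative infrared choice.

    import math

    def rayleigh_arcsec(wavelength_m: float, diameter_m: float) -> float:
        """Rayleigh diffraction limit, theta = 1.22 * lambda / D, in arcseconds."""
        return math.degrees(1.22 * wavelength_m / diameter_m) * 3600

    wavelength = 2.2e-6   # 2.2 um infrared band (illustrative choice)
    print(rayleigh_arcsec(wavelength, 6.5))    # ~0.09": 6.5 m diffraction limit
    print(rayleigh_arcsec(wavelength, 0.20))   # ~2.8": 20 cm seeing-limited case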

Space telescopes avoid problems with the atmosphere, but they are enormously expensive and the limit on their aperture size is quite restrictive. The Hubble Space Telescope, the world's largest telescope in orbit, has an aperture of only 2.4 metres, while terrestrial telescopes can have a diameter four times that size.

To avoid atmospheric aberration while retaining a large aperture, one can turn to ground-based telescopes equipped with an adaptive optics system. With this setup, the image quality that can be recovered is close to what the telescope would deliver if it were in space. Images obtained from the adaptive optics system on the 6.5 m diameter MMT telescope illustrate the impact.

A 64 Point Fourier Transform Chip

Fourth-generation wireless and mobile systems are currently the focus of research and development. Broadband wireless systems based on orthogonal frequency division multiplexing (OFDM) will allow packet-based, high-data-rate communication suitable for video transmission and mobile Internet applications. Considering this fact, we propose a datapath architecture using dedicated hardware for the baseband processor. The most computationally intensive parts of such a high-data-rate system are the 64-point inverse FFT in the transmit direction and the Viterbi decoder in the receive direction. Accordingly, an appropriate design methodology for constructing them has to be chosen, considering: a) how much silicon area is needed; b) how easily the particular architecture can be made flat for implementation in VLSI; c) how many wire crossings and long wires carrying signals to remote parts of the design are necessary in an actual implementation; and d) how small the power consumption can be. This paper describes a novel 64-point FFT/IFFT processor which has been developed as part of a larger research project to develop a single-chip wireless modem.

ALGORITHM FORMULATION

The discrete Fourier transform A(r) of a complex data sequence B(k) of length N, where r, k ∈ {0, 1, ..., N-1}, can be described as

    A(r) = \sum_{k=0}^{N-1} B(k) W_N^{rk}

where W_N = e^{-2\pi j/N}. Let us consider that N = MT, r = s + Tt and k = l + Mm, where s, l ∈ {0, 1, ..., M-1} and m, t ∈ {0, 1, ..., T-1}. Applying these values in the first equation, we get

    A(s + Tt) = \sum_{l=0}^{M-1} \left[ W_N^{sl} \left( \sum_{m=0}^{T-1} B(l + Mm) W_T^{sm} \right) \right] W_M^{tl}
This shows that it is possible to realize the FFT of length N by first decomposing it into one M-point and one T-point FFT, where N = MT, and combining them. But this results in a two-dimensional instead of a one-dimensional structure of the FFT. We can formulate the 64-point FFT by considering M = T = 8:

    A(s + 8t) = \sum_{l=0}^{7} \left[ W_{64}^{sl} \left( \sum_{m=0}^{7} B(l + 8m) W_8^{sm} \right) \right] W_8^{tl}, \qquad s, t ∈ \{0, 1, ..., 7\}
This shows that it is possible to express the 64-point FFT in terms of a two-dimensional structure of 8-point FFTs plus 64 complex inter-dimensional constant multiplications. At first, appropriate data samples undergo an 8-point FFT computation; each intermediate result is then multiplied by the corresponding inter-dimensional constant W_64^{sl}. Eight such computations are needed to generate a full set of 64 intermediate data, which then undergo a second 8-point FFT operation; as in the first stage, eight 8-point FFT computations are required. Proper reshuffling of the data coming out of the second 8-point FFT stage generates the final output of the 64-point FFT.
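As a check on this decomposition, the sketch below computes a 64-point DFT by running 8-point FFTs over one index, applying the 64 inter-dimensional constants W_64^{sl}, running 8-point FFTs over the other index, and reshuffling; numpy's FFT serves both as the 8-point building block and as the reference, and the function name is ours.

    import numpy as np

    def fft64_via_8x8(x):
        """64-point DFT built from two passes of 8-point FFTs plus the 64
        inter-dimensional twiddle factors W_64^(s*l), following the
        M = T = 8 decomposition above."""
        b = np.asarray(x, dtype=complex).reshape(8, 8, order="F")  # b[l, m] = B(l + 8m)
        c = np.fft.fft(b, axis=1)                 # first stage: C[l, s], FFT over m
        s = np.arange(8)
        l = np.arange(8).reshape(8, 1)
        c *= np.exp(-2j * np.pi * s * l / 64)     # 64 constant multiplications
        out = np.fft.fft(c, axis=0)               # second stage: out[t, s], FFT over l
        return out.reshape(64)                    # reshuffle: A(s + 8t) = out[t, s]

    x = np.random.randn(64) + 1j * np.random.randn(64)
    print(np.allclose(fft64_via_8x8(x), np.fft.fft(x)))   # True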

Fig. Signal flow graph of an 8-point DIT FFT.

Realization of the 8-point FFT using the conventional DIT approach does not require any multiplication operation.

The constants to be multiplied in the first two columns of the 8-point FFT structure are either ±1 or ±j. In the third column, the multiplications by the constants are actually addition/subtraction operations followed by a multiplication by 1/√2, which can be easily realized using only a hardwired shift-and-add operation. Thus an 8-point FFT can be carried out without using any true digital multiplier, which provides a way to realize a low-power 64-point FFT at reduced hardware cost. By contrast, the number of non-trivial complex multiplications for the conventional 64-point radix-2 DIT FFT is 66. Thus the present approach results in a reduction of about 26% in complex multiplications compared with the conventional radix-2 64-point FFT. This reduction in arithmetic complexity further enhances the scope for realizing a low-power 64-point FFT processor. However, the arithmetic complexity of the proposed scheme is almost the same as that of the radix-4 FFT algorithm, since the radix-4 64-point FFT algorithm needs 52 non-trivial complex multiplications.
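The shift-and-add realization of the 1/√2 constant can be illustrated in a few lines: 1/√2 ≈ 0.7071 is approximated by a short sum of power-of-two terms, each of which is a hardwired right shift in silicon. The five-term truncation below is one possible choice, not necessarily the one used in the actual design.

    # Approximate multiplication by 1/sqrt(2) using only shifts and adds, as
    # a hardwired datapath would: 1/sqrt(2) ~= 2^-1 + 2^-3 + 2^-4 + 2^-6 + 2^-8.
    # (One possible truncation; a real design picks the width for its error budget.)
    def mul_inv_sqrt2(x: int) -> int:
        return (x >> 1) + (x >> 3) + (x >> 4) + (x >> 6) + (x >> 8)

    x = 1 << 16                        # fixed-point test value
    print(mul_inv_sqrt2(x))            # 46336
    print(round(x / 2 ** 0.5))         # 46341: within about 0.01%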

Chip Morphing

1.1. The Energy Performance Tradeoff

Engineering is a study of tradeoffs. In computer engineering the tradeoff has traditionally been between performance, measured in instructions per second, and price. Because of fabrication technology, price is closely related to chip size and transistor count. With the emergence of embedded systems, a new tradeoff has become the focus of design. This new tradeoff is between performance and power or energy consumption. The computational requirements of early embedded systems were generally more modest, and so the performance-power tradeoff tended to be weighted towards power. "High performance" and "energy efficient" were generally opposing concepts.

However, new classes of embedded applications are emerging which not only have significant energy constraints, but also require considerable computational resources. Devices such as space rovers, cell phones, automotive control systems, and portable consumer electronics all require or can benefit from high-performance processors. Future generations of such devices can be expected to continue this trend.

Processors for these devices must be able to deliver high performance with low energy dissipation. Additionally, these devices exhibit large fluctuations in their performance requirements. Often a device will have very low performance demands for the bulk of its operation, but will experience periodic or asynchronous "spikes" when high performance is needed to meet a deadline or handle some interrupt event. These devices not only require a fundamental improvement in the performance-power tradeoff, but also necessitate a processor which can dynamically adjust its performance and power characteristics to provide the tradeoff which best fits the system requirements at that time.
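As a caricature of this kind of runtime adjustment, the sketch below implements a trivial governor that always selects the lowest-power configuration able to cover recent demand; the gear table, throughput numbers, and power figures are invented purely for illustration.

    # Toy "gear shifting" governor: pick the lowest-power configuration
    # that still meets the current performance demand.  All values here
    # are hypothetical.
    GEARS = [
        {"name": "low",  "ips": 1.0, "power_w": 0.2},
        {"name": "mid",  "ips": 2.0, "power_w": 0.7},
        {"name": "high", "ips": 4.0, "power_w": 2.0},
    ]

    def select_gear(demand_ips: float) -> dict:
        """Return the cheapest gear whose throughput covers the demand."""
        for gear in GEARS:             # ordered from lowest power upward
            if gear["ips"] >= demand_ips:
                return gear
        return GEARS[-1]               # saturate at the fastest gear

    # Mostly idle workload with an asynchronous spike, as described above.
    for demand in [0.3, 0.4, 0.3, 3.5, 3.8, 0.5]:
        g = select_gear(demand)
        print(f"demand={demand:.1f} -> {g['name']}, {g['power_w']} W")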

1.2. Fast, Powerful but Cheap, and Lots of Control

These motivations point to three major objectives for a power conscious embedded processor. Such a processor must be capable of high performance, must consume low amounts of power, and must be able to adapt to changing performance and power requirements at runtime.

The objective of this seminar is to define a microarchitecture which can exhibit low power consumption without sacrificing high performance. This will require a fundamental shift in the power-performance curve presented by traditional microprocessors. Additionally, the processor design must be flexible and reconfigurable at run-time so that it may present a series of configurations corresponding to different tradeoffs between performance and power consumption.

1.3. MORPH

These objectives and motivations were identified during the MORPH project, a part of the Power Aware Computing / Communication (PACC) initiative. In addition to exploring several mechanisms to fundamentally improve performance, the MORPH project brought forth the idea of "gear shifting" as an analogy for run-time reconfiguration. Realizing that real world applications vary their performance requirements dramatically over time, a major goal of the project was to design microarchitectures which could adjust to provide the minimal required performance at the lowest energy cost. The MORPH project explored a number of microarchitectural techniques to achieve this goal, such as morphable cache hierarchies and exploiting bit-slice inactivity. One technique, multi-cluster architectures, is the direct predecessor of this work. In addition to microarchitectural changes, MORPH also conducted a survey of realistic embedded applications which may be power constrained. Also, design implications of a power aware runtime system were explored.

