
Holography

1. Introduction:

With its omnipresent computers, all connected via the Internet, the Information Age has led to an explosion of information available to users. The decreasing cost of storing data, and the increasing storage capacity available within the same small device footprint, have been key enablers of this revolution. While current storage needs are being met, storage technologies must continue to improve in order to keep pace with the rapidly increasing demand.

However, both magnetic and conventional optical data storage technologies, where individual bits are stored as distinct magnetic or optical changes on the surface of a recording medium, are approaching physical limits beyond which individual bits may be too small or too difficult to store. Storing information throughout the volume of a medium—not just on its surface—offers an intriguing high-capacity alternative. Holographic data storage is a volumetric approach, which, although conceived decades ago, has made recent progress toward practicality with the appearance of lower-cost enabling technologies, significant results from longstanding research efforts, and progress in holographic recording materials.

In holographic data storage, an entire page of information is stored at once as an optical interference pattern within a thick, photosensitive optical material (Figure 1). This is done by intersecting two coherent laser beams within the storage material. The first, called the object beam, contains the information to be stored; the second, called the reference beam, is designed to be simple to reproduce—for example, a simple collimated beam with a planar wave front. The resulting optical interference pattern causes chemical and/or physical changes in the photosensitive medium: A replica of the interference pattern is stored as a change in the absorption, refractive index, or thickness of the photosensitive medium. When the stored interference grating is illuminated with one of the two waves that were used during recording [Figure 2], some of this incident light is diffracted by the stored grating in such a fashion that the other wave is reconstructed. Illuminating the stored grating with the reference wave reconstructs the object wave, and vice versa [Figure 2]. Interestingly, a backward-propagating or phase-conjugate reference wave, illuminating the stored grating from the “back” side, reconstructs an object wave that also propagates backward toward its original source [Figure 2].


A large number of these interference gratings or patterns can be superimposed in the same thick piece of media and can be accessed independently, as long as they are distinguishable by the direction or the spacing of the gratings. Such separation can be accomplished by changing the angle between the object and reference wave or by changing the laser wavelength. Any particular data page can then be read out independently by illuminating the stored gratings with the reference wave that was used to store that page. Because of the thickness of the hologram, this reference wave is diffracted by the interference patterns in such a fashion that only the desired object beam is significantly reconstructed and imaged on an electronic camera. The theoretical limits for the storage density of this technique are around tens of terabits per cubic centimeter.

In addition to high storage density, holographic data storage promises fast access times, because the laser beams can be moved rapidly without inertia, unlike the actuators in disk drives. With the inherent parallelism of its pagewise storage and retrieval, a large number of relatively slow, and therefore low-cost, parallel channels can combine to reach a very large compound data rate.

The data to be stored are imprinted onto the object beam with a pixelated input device called a spatial light modulator (SLM); typically, this is a liquid crystal panel similar to those on laptop computers or in modern camcorder viewfinders. To retrieve data without error, the object beam must pass through a high-quality imaging system—one capable of directing this complex optical wave front through the recording medium, where the wave front is stored and then later retrieved, and then onto a pixelated camera chip (Figure 3).

The image of the data page at the camera must be as close as possible to perfect. Any optical aberrations in the imaging system or mis-focus of the detector array would spread energy from one pixel to its neighbors. Optical distortions (where pixels on a square grid at the SLM are not imaged to a square grid) or errors in magnification will move a pixel of the image off its intended receiver, and either of these problems (blur or shift) will introduce errors in the retrieved data. To avoid having the imaging system dominate the overall system performance, near-perfect optics would appear to be unavoidable, which of course would be expensive. However, the above-mentioned readout of phase-conjugated holograms provides a partial solution to this problem. Here the reconstructed data page propagates backward through the same optics that was used during the recording, which compensates for most shortcomings of the imaging system. However, the detector and the spatial light modulator must still be properly aligned.


A rather unique feature of holographic data storage is associative retrieval: Imprinting a partial or search data pattern on the object beam and illuminating the stored holograms reconstructs all of the reference beams that were used to store data. The intensity that is diffracted by each of the stored interference gratings into the corresponding reconstructed reference beam is proportional to the similarity between the search pattern and the content of that particular data page. By determining, for example, which reference beam has the highest intensity and then reading the corresponding data page with this reference beam, the closest match to the search pattern can be found without initially knowing its address.

Because of all of these advantages and capabilities, holographic storage has provided an intriguing alternative to conventional data storage techniques for three decades. However, it is the recent availability of relatively low-cost components, such as liquid crystal displays for SLMs and solid-state camera chips from video camcorders for detector arrays, that has led to the current interest in creating practical holographic storage devices. Recent reviews of holographic storage can be found in the literature. A team of scientists from the IBM Research Division has been involved in exploring holographic data storage, partially as a partner in the DARPA-initiated consortia on holographic data storage systems (HDSS) and on photorefractive information storage materials (PRISM). In this paper, we describe the current status of our effort.

The overall theme of our research is the evaluation of the engineering tradeoffs between the performance specifications of a practical system, as affected by the fundamental material, device, and optical physics. Desirable performance specifications include data fidelity as quantified by bit-error rate (BER), total system capacity, storage density, readout rate, and the lifetime of stored data. This paper begins by describing the hardware aspects of holographic storage, including the test platforms we have built to evaluate materials and systems tradeoffs experimentally, and the hardware innovations developed during this process. Phase-conjugate readout, which eases the demands on both hardware design and material quality, is experimentally demonstrated. The second section of the paper describes our work in coding and signal processing, including modulation codes, novel preprocessing techniques, the storage of more than one bit per pixel, and techniques for quantifying coding tradeoffs. Then we discuss associative retrieval, which introduces parallel search capabilities offered by no other storage technology. The fourth section describes our work in testing and evaluating materials, including permanent or write-once read-many-times (WORM) materials, read/write materials, and photon-gated storage materials offering reversible storage without sacrificing the lifetime of stored data. The paper concludes with a discussion of applications for holographic data storage.

2. Hardware for holographic data storage:

Figure 3 shows the most important hardware components in a holographic storage system: the SLM used to imprint data on the object beam, two lenses for imaging the data onto a matched detector array, a storage material for recording volume holograms, and a reference beam intersecting the object beam in the material. What is not shown in Figure 3 is the laser source, beam-forming optics for collimating the laser beam, beam splitters for dividing the laser beam into two parts, stages for aligning the SLM and detector array, shutters for blocking the two beams when needed, and wave plates for controlling polarization. Assuming that holograms will be angle-multiplexed (superimposed yet accessed independently within the same volume by changing the incidence angle of the reference beam), a beam-steering system directs the reference beam to the storage material. Wavelength multiplexing has some advantages over angle multiplexing, but the fast tunable laser sources at visible wavelengths that would be needed do not yet exist.

The optical system shown in Figure 3, with two lenses separated by the sum of their focal lengths, is called the “4-f” configuration, since the SLM and detector array turn out to be four focal lengths apart. Other imaging systems such as the Fresnel configuration (where a single lens satisfies the imaging condition between SLM and detector array) can also be used, but the 4-f system allows the high numerical apertures (large ray angles) needed for high density. In addition, since each lens takes a spatial Fourier transform in two dimensions, the hologram stores the Fourier transform of the SLM data, which is then Fourier-transformed again upon readout by the second lens. This has several advantages: Point defects on the storage material do not lead to lost bits, but result in a slight loss in signal-to-noise ratio at all pixels; and the storage material can be removed and replaced in an offset position, yet the data can still be reconstructed correctly. In addition, the Fourier transform properties of the 4-f system lead to the parallel optical search capabilities offered by holographic associative retrieval. The disadvantages of the Fourier transform geometry come from the uneven distribution of intensity in the shared focal plane of the two lenses.
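
As a rough illustration of these Fourier-transform properties (an illustrative numpy sketch, not code from the actual test platforms), cascading two discrete Fourier transforms of a binary page returns the page, spatially inverted, and zeroing a single sample in the Fourier plane, standing in for a point defect on the medium, perturbs every detector pixel only slightly instead of wiping out one bit:

    import numpy as np

    rng = np.random.default_rng(0)
    page = rng.integers(0, 2, size=(64, 64)).astype(float)    # binary data page at the SLM

    fourier = np.fft.fft2(page)             # first lens: SLM plane -> Fourier (hologram) plane
    defect = fourier.copy()
    defect[10, 20] = 0.0                    # a point defect on the storage medium

    image_clean = np.fft.fft2(fourier)      # second lens: Fourier plane -> detector
    image_defect = np.fft.fft2(defect)

    # Two forward transforms invert the image; undo the flip before comparing.
    recovered = np.flip(image_clean, axis=(0, 1)) / page.size
    recovered = np.roll(recovered, 1, axis=(0, 1)).real
    print("page recovered:", np.allclose(recovered, page))     # True

    # The energy lost to the defect is spread thinly over every detector pixel.
    err = np.abs(image_defect - image_clean) / page.size
    print("max per-pixel error from one defect:", err.max())   # small compared with 1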


3. Coding and signal processing:

In a data-storage system, the goal of coding and signal processing is to reduce the BER to a sufficiently low level while achieving important figures of merit such as high density and high data rate. This is accomplished by stressing the physical components of the system well beyond the point at which the channel is error-free, and then introducing coding and signal-processing schemes to reduce the BER to levels acceptable to users. Although the system retrieves raw data from the storage device with many errors (a high raw BER), the coding and signal processing ensures that the user data are delivered with an acceptably low level of error (a low user BER).

Coding and signal processing can involve several qualitatively distinct elements. The cycle of user data from input to output can include interleaving, error-correction-code (ECC) and modulation encoding, signal preprocessing, data storage in the holographic system, hologram retrieval, signal post processing, binary detection, and decoding of the interleaved ECC.

The ECC encoder adds redundancy to the data in order to provide protection from various noise sources. The ECC-encoded data are then passed on to a modulation encoder, which adapts the data to the channel: It manipulates the data into a form less likely to be corrupted by channel errors and more easily detected at the channel output. The modulated data are then input to the SLM and stored in the recording medium. On the retrieving side, the CCD returns pseudo-analog data values (typically camera count values of eight bits), which must be transformed back into digital data (typically one bit per pixel). The first step in this process is a post-processing step, called equalization, which attempts to undo distortions created in the recording process, still in the pseudo-analog domain. Then the array of pseudo-analog values is converted to an array of binary digital data via a detection scheme. The array of digital data is then passed first to the modulation decoder, which performs the inverse operation to modulation encoding, and then to the ECC decoder. In the next subsections, we discuss several sources of noise and distortion and indicate how the various coding and signal-processing elements can help in dealing with these problems.

• Binary detection

The simplest detection scheme is threshold detection, in which a threshold T is chosen: Any CCD pixel with intensity above T is declared a 1, while those below T are assigned to class 0. However, it is not at all obvious how to choose a threshold, especially in the presence of spatial variations in intensity, and so threshold detection may perform poorly. The following is an alternative.

Within a sufficiently small region of the detector array, there is not much variation in pixel intensity. If the page is divided into several such small regions, and within each region the data patterns are balanced (i.e., have an equal number of 0s and 1s), detection can be accomplished without using a threshold. For instance, in sorting detection, letting N denote the number of pixels in a region, one declares the N/2 pixels with highest intensity to be 1s and those remaining to be 0s. This balanced condition can be guaranteed by a modulation code, which encodes arbitrary data patterns into codewords represented as balanced arrays. Several such codes have been reported in the literature. Thus, sorting detection combined with balanced modulation coding provides a means to obviate the inaccuracies inherent in threshold detection. The price that is paid here is that, in order to satisfy the coding constraint (forcing the number of 0s and 1s to be equal), each block of N pixels now represents only M bits of data, where M is typically less than N; the capacity gained must therefore outweigh the loss implied by the code rate, r = M/N. For example, for N = 8, there are 70 ways to combine eight pixels such that exactly four are 1 and four are 0. Consequently, we can store six bits of data (64 different bit sequences) per block, for a code rate of 75%. The code must then produce a more than 33% increase in the number of holographic pages stored in order to increase the total capacity of the system in bits. A sketch of such a balanced code with sorting detection is given below.
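
The following Python sketch (illustrative only; the particular choice of 64 balanced patterns is arbitrary and not taken from the original work) shows a balanced 6:8 modulation code together with sorting detection: each 6-bit word maps to an 8-pixel block containing exactly four ON pixels, and on readout the four brightest pixels in the block are declared 1, with no threshold needed.

    import numpy as np

    # Codebook: 64 of the 70 balanced 8-pixel patterns (exactly four ON pixels).
    balanced = [p for p in range(256) if bin(p).count("1") == 4]
    codebook = balanced[:64]                        # arbitrary choice of 64 patterns
    encode = {data: code for data, code in enumerate(codebook)}

    def to_pixels(word):
        return np.array([(word >> i) & 1 for i in range(8)], dtype=float)

    def sorting_detect(intensities):
        """Declare the four brightest of the eight pixels ON; no threshold needed."""
        bits = np.zeros(8, dtype=int)
        bits[np.argsort(intensities)[-4:]] = 1
        return int(sum(int(b) << i for i, b in enumerate(bits)))

    data = 37                                       # a 6-bit user word
    pixels = to_pixels(encode[data])
    noisy = 0.6 * pixels + 0.2 + 0.05 * np.random.default_rng(1).normal(size=8)
    print(sorting_detect(noisy) == encode[data])    # True at this noise level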

One problem with this scheme is that the array detected by sorting may not be a valid codeword for the modulation code; in this case, one must have a procedure that transforms balanced arrays into valid codewords. This is not much of a problem when most balanced arrays of size N are codewords, but for other codes this process can introduce serious errors. A more complex but more accurate scheme than sorting is correlation detection. In this scheme, the detector chooses the codeword that achieves maximum correlation with the array of received pixel intensities. In the context of the 6:8 code described above, 64 correlations are computed for each code block, avoiding the six combinations of four 1s and four 0s that are not used by the code but which might be chosen by a sorting algorithm, as sketched below.
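
A matching sketch of correlation detection for the same hypothetical 6:8 code (again illustrative, not the implementation used in the actual system): the decoder forms the inner product of the received intensities with each of the 64 valid codewords and picks the best match, so the output is always a legal codeword.

    import numpy as np

    # The same 64-word balanced codebook as above, as a 64 x 8 matrix of 0/1 pixels.
    balanced = [p for p in range(256) if bin(p).count("1") == 4]
    codebook = np.array([[(p >> i) & 1 for i in range(8)] for p in balanced[:64]], float)

    def correlation_detect(intensities):
        scores = codebook @ intensities       # inner product with each valid codeword
        return int(np.argmax(scores))         # decoded 6-bit data value

    received = np.array([0.2, 0.8, 0.2, 0.8, 0.7, 0.2, 0.9, 0.3])   # one noisy block
    print(correlation_detect(received))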

• Interpixel interference

Interpixel interference is the phenomenon in which intensity at one particular pixel contaminates data at nearby pixels. Physically, this arises from optical diffraction or aberrations in the imaging system. The extent of interpixel interference can be quantified by the point-spread function, sometimes called a PSF filter. If the channel is linear and the PSF filter is known, the interpixel interference can be represented as a convolution of the PSF with the original (encoded) data pattern, and it can then be “undone” in the equalization step via a filter inverse to the PSF filter (appropriately called deconvolution). Results on deconvolution with data collected on DEMON I at IBM have been reported.
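
The sketch below illustrates linear equalization with an assumed point-spread function (the PSF, noise level, and regularization constant are invented for illustration): a binary page is blurred by convolution with the PSF, and an inverse filter applied in the Fourier domain, with a small regularization term that guards against the noise enhancement discussed in the next paragraph, removes most of the resulting bit errors.

    import numpy as np

    rng = np.random.default_rng(0)
    page = rng.integers(0, 2, (64, 64)).astype(float)       # binary data page

    # Assumed PSF: 40% of the light stays on the pixel, 15% leaks to each neighbor.
    psf = np.zeros((64, 64))
    psf[0, 0], psf[0, 1], psf[1, 0], psf[0, -1], psf[-1, 0] = 0.4, 0.15, 0.15, 0.15, 0.15

    H = np.fft.fft2(psf)
    blurred = np.fft.ifft2(np.fft.fft2(page) * H).real + 0.01 * rng.normal(size=page.shape)

    eps = 3e-3                                    # regularization against noise blow-up
    equalized = np.fft.ifft2(np.fft.fft2(blurred) * np.conj(H) / (np.abs(H) ** 2 + eps)).real

    raw_errors = np.sum((blurred > 0.5) != page)
    eq_errors = np.sum((equalized > 0.5) != page)
    print(raw_errors, "->", eq_errors)            # far fewer bit errors after equalization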

Deconvolution has the advantage that it incurs no capacity overhead (a code rate of 100%). However, it suffers from mismatch in the channel model (the physics of the intensity detection makes the channel nonlinear), inaccuracies in the estimation of the PSF, and enhancement of random noise. An alternative approach to combating interpixel interference is to forbid certain patterns of high spatial frequency via a modulation code. According to one channel model, for realistic and near-optimal choices of system parameters (in particular at the Nyquist aperture), if one forbids a 1 surrounded by four 0s (in its four neighbors on the cardinal points of the compass), areal density can be improved provided that the modulation code has a rate greater than 0.83. A code with rate 8:9 ≈ 0.89 exists, and codes of much higher rate have been constructed, at the expense of increased complexity.

A code that forbids a pattern of high spatial frequency (or, more generally, a collection of such patterns of rapidly varying 0 and 1 pixels) is called a low-pass code. Such codes constrain the allowed pages to have limited high-spatial-frequency content. One general scheme for designing such codes is a strip encoding method in which each data page is encoded, from top to bottom, in narrow horizontal pixel strips, with the constraint satisfied both along each strip and between neighboring strips. Codes that simultaneously satisfy both a constant-weight constraint and a low-pass constraint have also been constructed. A simple check for the forbidden pattern mentioned above is sketched below.
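
This is not a strip encoder, only a small illustration: the function below checks whether a page violates the simplest low-pass constraint discussed above, an isolated ON pixel whose four cardinal neighbors are all OFF (treating the page border as OFF is an assumption made here).

    import numpy as np

    def violates_lowpass(page):
        """True if the page contains a 1 whose four cardinal neighbors are all 0."""
        p = np.pad(page, 1)                      # zero border (assumed OFF)
        core = p[1:-1, 1:-1]
        neighbors = p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
        return bool(np.any((core == 1) & (neighbors == 0)))

    print(violates_lowpass(np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])))   # True
    print(violates_lowpass(np.array([[0, 1, 0], [0, 1, 0], [0, 0, 0]])))   # False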

• Error correction

In contrast to modulation codes, which introduce a distributed redundancy in order to improve binary detection of pseudo-analog intensities, error correction incorporates explicit redundancy in order to identify decoded bit errors. An ECC code receives a sequence of decoded data (containing both user and redundant bits) with an unacceptably high raw BER, and uses the redundant bits to correct errors in the user bits and reduce the output user BER to a tolerable level (typically, less than 10^-12). The simplest and best-known error-correction scheme is parity checking, in which bit errors are identified because they change the number of 1s in a given block from odd to even, for instance. Most of the work on ECC for holographic storage has focused on the more powerful Reed-Solomon (RS) codes. These codes have been used successfully in a wide variety of applications for two reasons: 1) They have very strong error-correction power relative to the required redundancy, and 2) their algebraic structure facilitates the design and implementation of fast, low-complexity decoding algorithms. As a result, there are many commercially available RS chips.

In a straightforward implementation of an ECC such as an RS code, each byte would be written into a small array (say 2 × 4 pixels for 8-bit bytes), and the bytes in a codeword would simply be rastered across the page. There might be approximately 250 bytes per codeword. If the errors were independent from pixel to pixel and identically distributed across the page, this would work well. However, experimental evidence shows that the errors are neither independent nor identically distributed. For example, interpixel interference can cause an error event to affect a localized cluster of pixels, perhaps larger than a single byte. And imperfections in the physical components can cause the raw BER to vary dramatically across the page (typically, the raw BER is significantly higher near the edges of the page).

Assume for simplicity that our choice of ECC can correct at most two byte errors per codeword. If the codewords are interleaved so that any cluster error can contaminate at most two bytes in each codeword, the cluster error will not defeat the error-correcting power of the code. Interleaving schemes such as this have been studied extensively for one-dimensional applications (for which cluster errors are known as burst errors). However, relatively little work has been done on interleaving schemes for multidimensional applications such as holographic recording. One recent exception is a class of sophisticated interleaving schemes for correcting multidimensional cluster errors developed recently. The basic interleaving idea is sketched below.
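
The sketch uses invented sizes (16 codewords of 16 bytes each; real RS codewords are longer): bytes are laid out on the page so that a small cluster error touches at most a couple of bytes of any single codeword.

    import numpy as np

    n_codewords, codeword_len = 16, 16
    codewords = np.arange(n_codewords * codeword_len).reshape(n_codewords, codeword_len)

    # Classic block interleaver: byte j of codeword i is written at page row j, column i.
    page = codewords.T                              # 16 x 16 page of byte labels

    # A 2 x 2 cluster of byte errors on the page...
    hit = [(5, 7), (5, 8), (6, 7), (6, 8)]

    # ...lands on codewords 7 and 8 only, two bytes each, within the assumed ECC power.
    print([int(page[r, c]) // codeword_len for r, c in hit])    # [7, 8, 7, 8]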

For certain sources of error, it is reasonable to assume that the raw-BER distribution is fixed from hologram to hologram. Thus, the raw-BER distribution across the page can be accurately estimated from test patterns. Using this information, codewords can then be interleaved in such a way that not too many pixels with high raw BER lie in the same codeword (thereby lowering the probability of decoder failure or miscorrection). This technique, known as matched interleaving, can yield a significant improvement in user BER.

• Predistortion

The techniques we have described above are variations on existing coding and signal-processing methods from conventional data-storage technologies. In addition, a novel preprocessing technique unique to holographic data storage has been developed at IBM Almaden. This technique, called “predistortion”, works by individually manipulating the recording exposure of each pixel on the SLM, either through control of exposure time or by relative pixel transmission (analog brightness level on the SLM). Deterministic variations among the ON pixels, such as those created by fixed-pattern noise, nonuniformity in the illuminated object beam, and even interpixel cross talk, can be suppressed (thus decreasing BER). Many of the spatial variations to be removed are present in an image transmitted with low power from the SLM directly to the detector array. Once this pattern of nonuniform brightness levels is obtained, the recording exposure for each pixel is simply calculated from the ratio between its current brightness value and the desired pixel brightness, as sketched below.
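
A minimal sketch of that correction, assuming a linear response and an available low-power flat-field image (all numbers here are invented): each pixel's recording exposure is scaled by the ratio of the desired brightness to its measured brightness, which removes the deterministic pixel-to-pixel variation.

    import numpy as np

    rng = np.random.default_rng(2)
    desired = 1.0                                      # target ON-pixel brightness
    measured = 1.0 + 0.2 * rng.normal(size=(8, 8))     # flat-field image with fixed-pattern noise
    measured = np.clip(measured, 0.5, 1.5)

    exposure = desired / measured                      # per-pixel exposure correction
    corrected = exposure * measured                    # brightness after predistorted recording
    print(np.std(measured), "->", np.std(corrected))   # deterministic variation removed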

At low density, raw-BER improvements of more than 15 orders of magnitude are possible. More significantly, at high density, interpixel cross talk (which is deterministic once each data page is encoded) can be suppressed and the raw BER improved from 10^-4 to 10^-12. Figure 12 shows this experimental result, implemented on the DEMON I platform with a square aperture of 2.8 mm × 2.8 mm placed at the Fourier transform plane of the imaging optics. Another use of the predistortion technique is to increase the contrast between the 1 and 0 pixel states provided by the SLM. By using interferometric subtraction while recording the hologram, the amount of light received at the 0 detector pixels can be reduced.

• Gray scale

The previous sections have shown that the coding introduced to maintain an acceptable BER comes with an unavoidable overhead cost, resulting in somewhat less than one bit per pixel. The predistortion technique described in the previous section makes it possible to record data pages containing gray scale. Since more than two brightness levels per pixel can be recorded and detected, it is possible to store more than one bit of data per pixel. The histogram of a hologram with six gray-scale levels made possible by the predistortion technique is shown in Figure 13. To encode and decode these gray-scale data pages, we also developed several local-threshold methods and balanced modulation codes.


If pixels take one of g brightness levels, each pixel can convey log2 g bits of data. The total amount of stored information per page has increased, so gray-scale encoding appears to produce a straightforward improvement in both capacity and readout rate. However, gray scale also divides the system's signal-to-noise ratio (SNR) into g − 1 parts, one for each transition between brightness levels. Because total SNR depends on the number of holograms, dividing the SNR for gray scale (while requiring the same error rate) leads to a reduction in the number of holograms that can be stored. The gain in bits per pixel must then outweigh this reduction in stored holograms to increase the total capacity in bits.
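
As a point of reference, the arithmetic behind this tradeoff is simple; the snippet below merely tabulates the bits per pixel, log2 g, against the number of brightness-level transitions, g − 1, among which the SNR must be divided.

    import math

    for g in (2, 3, 4, 6):
        print(g, "levels:", round(math.log2(g), 2), "bits/pixel,", g - 1, "SNR intervals")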

• Capacity estimation

To quantify the overall storage capacity of different gray-scale encoding options, we developed an experimental capacity-estimation technique. In this technique, the dependence of raw BER on readout power is first measured experimentally. A typical curve is shown in Figure 14. The capacity-estimation technique then produces the relationship between M, the number of holograms that can be stored, and raw BER [Figure 14]. Without the capacity-estimation technique, producing Figure 14 would require an exhaustive series of multiple hologram experiments.


In general, as the raw BER of the system increases, the number of holograms, M, increases slowly. In order to maintain a low user BER (say, 10^-12) as this raw-BER operating point increases, the redundancy of the ECC code must increase. Thus, while the number of holograms increases, the ECC code rate decreases. These two opposing trends create an “optimal” raw BER, at which the user capacity is maximized. For the Reed Solomon ECC codes we commonly use, this optimal raw BER is approximately 10^-3. By computing these maximum capacities for binary data pages and gray-scale data pages from g = 2 to g = 6, we were able to show that gray-scale holographic data pages provide an advantage over binary encoding in both capacity and readout rate. The use of three gray levels offered a 30% increase in both capacity and readout rate over conventional binary data pages.
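
The interplay can be illustrated with purely hypothetical model curves (they are not measured data, and the location of the maximum depends entirely on the assumed curves): as the tolerated raw BER is relaxed, a model number of storable holograms grows slowly while a model ECC code rate falls, and their product, the relative user capacity, has an interior maximum.

    import numpy as np

    raw_ber = np.logspace(-5, -1, 200)

    # Hypothetical stand-ins for the two measured relationships described above.
    holograms = 1000 * (1 + 0.35 * np.log10(raw_ber / 1e-5))   # grows slowly with raw BER
    ecc_rate = np.clip(1 - 10 * np.sqrt(raw_ber), 0, 1)        # falls as redundancy grows

    capacity = holograms * ecc_rate                             # relative user capacity
    print(f"model capacity peaks near raw BER = {raw_ber[np.argmax(capacity)]:.0e}")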


4. Associative retrieval:

As mentioned above, volume holographic data storage conventionally implies that data imprinted on an object beam will be stored volumetrically [Figure 15], to be read out at some later time by illumination with an addressing reference beam [Figure 15]. However, the same hologram (the interference pattern between a reference beam and a data-bearing object beam) can also be illuminated by the object beam [Figure 15]. This reconstructs all of the angle-multiplexed reference beams that were used to record data pages into the volume. The amount of power diffracted into each “output” beam is proportional to the 2D cross-correlation between the input data page (being displayed on the SLM) and the stored data page (previously recorded with that particular reference beam). Each set of output beams can be focused onto a detector array, so that each beam forms its own correlation “peak.” Because both the input and output lenses perform a two-dimensional Fourier transform in spatial coordinates, the optical system is essentially multiplying the Fourier transforms of the search page and each data page and then taking the Fourier transform of this product (thus implementing the convolution theorem optically). Because of the volume nature of the hologram, only a single slice through the 2D correlation function is produced (the other dimension has been “used” already, providing the ability to correlate against multiple templates simultaneously).

The center of each correlation peak represents the 2D inner product (the simple overlap) between the input page being presented to the system and the associated stored page. If the patterns which compose these pages correspond to the various data fields of a database, and each stored page represents a data record, the optical correlation process has just simultaneously compared the entire database against the search argument. This parallelism gives content-addressable holographic data storage an inherent speed advantage over a conventional serial search, especially for large databases. For instance, if an unhindered conventional “retrieve-from-disk-and-compare” software-based search is limited only by the sustained hard-disk readout rate (25 MB/s), a search over one million 1-KB records would take ~40 s. In comparison, with off-the-shelf, video-rate SLM and CCD technology, an appropriately designed holographic system could search the same records in ~30 ms, a 1200× improvement. Custom components could enable 1000 or more parallel searches per second.
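
A quick back-of-envelope check of these figures, using only the numbers quoted above:

    records, record_size, disk_rate = 1e6, 1e3, 25e6        # one million 1-KB records, 25 MB/s
    print(records * record_size / disk_rate, "s serial")     # 40 s of sustained disk reads
    print(1.0 / 30, "s holographic")                         # roughly one video-rate SLM frame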

For this optical correlation process to represent a database search, the spatial patterns of bright (ON) pixels on the holographic data pages must somehow represent the digital data from fixed-length database fields. The SLM is divided into separate regions, each dedicated to a particular fixed-length field of the database. For example, a two-bit data field might be encoded by four blocks of pixels at a particular point within the SLM page. Such an encoding implements an exact search through the database. By thresholding the detected optical signal (essentially an analog quantity), any matching records are identified. Thresholding becomes commensurately more difficult, however, when many fields are being searched simultaneously. And when the threshold does not work correctly, completely unrelated records are identified as matches, because near matches between pixel-block patterns do not represent near matches in encoded data value.

We have developed a novel data-encoding method, which allows similarity or fuzzy searching, by encoding similar data values into similar pixel block patterns. As shown in Figure 16(a), data values are encoded by the position of a block of ON pixels within a vertical track, creating a “slider” (like the control found on a stereo's graphic equalizer, for instance). As an example, the data value 128 might be encoded as a pixel block of height hs, centered within a column of 256 pixels. During the search for data values near 128, the partial overlap between the input slider block [Figure 16(b)] and the stored slider block causes the resulting correlation peak to indicate the similarity between the input query and the stored data. The holographic content-addressable system is optically measuring the inner product between an input data page (containing a pixel block at some position along this slider column), and each stored page (possibly containing a pixel block at the same position in the same slider column). This is the same result that would be produced by cutting holes at nearly the same spot on two sheets of black cardboard, aligning their edges, and then holding them up to a light. The holographic system is merely condensing this partial overlap into a single intensity result, and is performing the same test on a large number of holograms simultaneously.
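
A small sketch of this slider encoding and the overlap it produces (the column length and block height below are illustrative choices, not values from the text): the inner product between a query slider and a stored slider falls off linearly with the difference between the encoded values, which is the triangular response discussed below (the optically detected intensity would be the square of this quantity).

    import numpy as np

    COLUMN, BLOCK = 256, 32

    def slider(value):
        """Encode a value as the position of a block of ON pixels in a column."""
        col = np.zeros(COLUMN)
        col[value:value + BLOCK] = 1.0
        return col

    stored = slider(128)
    for query in (64, 112, 120, 128, 136, 160):
        overlap = float(slider(query) @ stored)    # what the optical correlator measures
        print(query, overlap / BLOCK)              # 1.0 at an exact match, falling off linearly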


More compact data representations can be realized by combining both fuzzy and exact search encoding. The higher-order bits would be encoded compactly with binary-type encoding, while the low-order bits remained available for fuzzy searching. This trades search flexibility for more capacity (in terms of fields per database record). By adding a correlation camera to the DEMON I platform, we experimentally demonstrated this fuzzy search encoding.

Figure 16 shows results from a search of a single fuzzy-encoded data field as the input data value approached and then exceeded the stored value. The amplitude response (the square root of measured power as a function of the relative position of the input slider block) formed a triangularly shaped function. The correlation of identical rectangles creates the triangle; the signals add in field amplitude yet are detected in intensity; thus, this triangle shows up after taking the square root of the measured signals. With this fuzzy encoding technique, the analog nature of the optical output becomes an enabling feature instead of a drawback.

To demonstrate high-fidelity parallel searching of a holographic content-addressable memory, we stored a small multimedia database in our modified DEMON I system. Each hologram represented one record from an IBM query-by-image-content (QBIC) database. In the QBIC system, searches are performed across feature vectors previously extracted from the images, rather than on the images themselves. Each record included several alphanumeric fields (such as image description and image number) encoded for exact searches, and 64 fuzzy sliders containing the color histogram information (percentage of each given color within the associated image). A separate portion of the SLM page, pixel-matched onto a CCD detector for conventional address-based holographic readout, was encoded with the binary data for the small binary image. One hundred holograms were recorded in a 90-degree-geometry LiNbO3 crystal, with the reference angles chosen so that each reference beam was focused to a unique portion of the correlation camera.

Each search, initiated by a user query, ran under computer control, including display of the appropriate patterns, detection of the correlation peaks (averaging eight successive measurements to reduce detector noise), calibration by hologram strength, identification of the eight highest correlation scores, mapping of correlation bins to reference-beam angle, address-based recall of these eight holograms, decoding of the pixel-matched data pages, and, finally, display of the binary images on the computer monitor. The optical readout portion occupied only 0.25 s of the total ~5-s cycle time. To find images based on color similarity, the 64 sliders were used to input the color histogram information for the upper left image in Figure 17. The slider patterns for this color histogram were input to the system on the SLM, resulting in 100 reconstructed reference beams. After detection, calibration, and ranking of these 100 correlation peaks, the reference beams for the brightest eight were input to the system again, resulting in eight detected data pages and thus eight decoded binary images. Figure 17 shows the first four of these images, indicating that the holographic search process found these images to be those which most closely matched the color histogram query. Figure 17 quantifies the search fidelity by plotting the detected correlation peak intensity as a function of the overlap between the object-beam search patterns. Perfect system performance would result in a smooth monotonic curve; however, noise in the real system introduces deviations away from this curve. As expected, the feature vector for the left-hand image correlated strongly with itself, but the system was also able to correctly identify the images with the highest cross-correlation.


These sliders could also be used to select images by color distribution. Figure 17 also shows the results of a search for images containing 20% white and 20% light gray. Although several images were ranked slightly higher than they deserved (red circle), the system performance was impressive, considering that the background “dark” signal was twice as large as the signal. Figure 17 also shows a search of the alphanumeric description field for the keyword shore. Note that because many characters are involved, both the expected and measured scores are large. However, we obtained similar results for exact search arguments as small as a single character.

With the fuzzy coding techniques we have introduced, volume holographic content-addressable data storage is an attractive method for rapidly searching vast databases with complex queries. Areas of current investigation include implementing system architectures that support many thousands of simultaneously searched records, and quantifying the capacity-reliability tradeoffs.


5. Recording materials:

Materials and media requirements for holographic data storage

Thus far, we have discussed the effects of the hardware, and of coding and signal processing, on the performance of holographic data storage systems. Desirable parameters described so far include storage capacity, data input and output rates, stability of stored data, and device compactness, all of which must be delivered at a specified (very low) user BER. To a large extent, the possibility of delivering such a system is limited by the properties of the materials available as storage media. The connections between materials properties and system performance are complex, and many tradeoffs are possible in adapting a given material to yield the best results. Here we attempt to outline in a general way the desirable properties for a holographic storage medium and give examples of some promising materials.

Properties of foremost importance for holographic storage media can be broadly characterized as “optical quality,” “recording properties,” and “stability.” These directly affect the data density and capacity that can be achieved, the data rates for input and output, and the BER.

As mentioned above, for highest density at low BER, the imaging of the input data from the SLM to the detector must be nearly perfect, so that each data pixel is read cleanly by the detector. The recording medium itself is part of the imaging system and must exhibit the same high degree of perfection. Furthermore, if the medium is moved to access different areas with the readout beam, this motion must not compromise the imaging performance. Thus, very high standards of optical homogeneity and fabrication must be maintained over the full area of the storage medium. With sufficient materials development effort and care in fabrication, the necessary optical quality has been achieved for both inorganic photorefractive crystals and organic photopolymer media. As discussed above, phase-conjugate readout could ultimately relax these requirements.

A more microscopic aspect of optical quality is intrinsic light scattering of the material. The detector noise floor produced by scattering of the readout beam imposes a fundamental minimum on the efficiency of a stored data hologram, and thus on the storage density and rate of data readout. Measurements on the PRISM tester have shown that, in general, the best organic media have a higher scattering level than inorganic crystals, by about a factor of 100 or more.

Because holography is a volume storage method, the capacity of a holographic storage system tends to increase as the thickness of the medium increases, since greater thickness implies the ability to store more independent diffraction gratings with higher selectivity in reading out individual data pages without cross talk from other pages stored in the same volume. For the storage densities necessary to make holography a competitive storage technology, a media thickness of at least a few millimeters is highly desirable. In some cases, particularly for organic materials, it has proven difficult to maintain the necessary optical quality while scaling up the thickness, while in other cases thickness is limited by the physics and chemistry of the recording process.

Holographic recording properties are characterized in terms of sensitivity and dynamic range. Sensitivity refers to the extent of refractive index modulation produced per unit exposure (energy per unit area). Diffraction efficiency (and thus the readout signal) is proportional to the square of the index modulation times the thickness. Thus, recording sensitivity is commonly expressed in terms of the square root of the diffraction efficiency, η:

Sη2 = √η / (I ℓ t),

(1)

where I is the total intensity, ℓ is the medium thickness, and t is the exposure time; this form of sensitivity is usually given in units of cm/J. Since not all materials tested have the same thickness, a more useful figure for comparison is a modified sensitivity given by the usual sensitivity times the thickness:

S′η2 = Sη2 × ℓ.

(2)

This quantity has units of cm²/J and can be thought of as the inverse of the writing fluence required to produce a standard signal level. The unprimed variable, Sη2, might be used to convey the potential properties of a storage material when the particular sample under test is extremely thin; in contrast, S′η2 quantifies the ability of a specific sample to respond to a recording exposure.

For high output data rate, one must read holograms with many pixels per page in a reasonably short time. To read a megapixel hologram in about 1 ms with reasonable laser power, and to have enough signal at the detector for a low error rate, a diffraction efficiency around η = 3 × 10^-5 is required. To write such a hologram in 1 ms, and thus achieve input and output data rates of 1 Gb/s, the sensitivity for this example must be at least S′η2 = 20 cm²/J.
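
As a rough consistency check using Equations (1) and (2) and the numbers just quoted (the inferred intensity is our own arithmetic, not a figure stated in the text):

    eta, S_prime = 3e-5, 20.0                      # target diffraction efficiency; S'η2 in cm²/J
    fluence = eta ** 0.5 / S_prime                 # required writing fluence, J/cm²
    print(fluence, "J/cm^2")                       # about 2.7e-4 J/cm²
    print(fluence / 1e-3, "W/cm^2 for a 1-ms exposure")   # about 0.27 W/cm²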

The term dynamic range refers to the total response of the medium when it is divided up among many holograms multiplexed in a common volume of material; it is often parameterized as a quantity known as M# (pronounced “M-number”), where

M# = Σ √ηi,

(3)

and the sum is over the M holograms in one location. The M# also describes the scaling of diffraction efficiency as M is increased, i.e.,

η = (M#/M)².

(4)

Dynamic range has a strong impact on the data storage density that can be achieved. For example, reaching a density of 100 bits/µm² (64 Gb/in.²) with megapixel data pages, a target diffraction efficiency of 3 × 10^-5, and an area at the medium of 0.1 cm² would require M# = 5, a value that is barely achievable with known recording materials under exposure conditions appropriate for recording high-fidelity data holograms.
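
This figure can be checked from Equation (4): 100 bits/µm² over 0.1 cm² (10^7 µm²) is 10^9 bits, i.e., M = 1000 megapixel pages sharing one location, so the required M# is M·√η. The same arithmetic as a snippet:

    M, eta = 1000, 3e-5              # 1000 megapixel pages per location; target efficiency
    print(M * eta ** 0.5)            # about 5.5, the required M#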

Stability is a desirable property for any data storage system. In the case of holographic storage, the response of the recording medium, which converts the optical interference pattern to a refractive index pattern (the hologram), is generally linear in light intensity and lacks the response threshold found in bistable storage media such as magnetic films. In the case of write-once-read-many (WORM) media such as photopolymers, the material response is irreversible; once the material has been fully exposed, further optical irradiation produces no further response, and the readout beam can interrogate the data without erasing or distorting it. Much basic research in holographic storage has been performed using photorefractive crystals as storage media (e.g., the experiments described above). Of these crystals, Fe-doped lithium niobate has been the workhorse. Its sensitivity is sufficient for demonstration purposes, but falls about a factor of 100 short of what is needed for practical applications. Since photorefractives are reversible materials, they suggest the possibility of a rewritable holographic storage medium.

However, because they are linear and reversible, they are subject to erasure during readout. Several schemes have been investigated for stabilizing or “fixing” the recording so that the data can be read without erasure. One scheme that does this without compromising the ability to erase the data, known as two-color recording, has received a good deal of attention recently. Recording is enabled by simultaneous irradiation of the crystal by a gating beam of different wavelength than the usual object and reference beams. In the absence of the gating wavelength, the data can be read without causing erasure. More details are given in the next section.

Stability in the dark over long periods is also an issue; organic photopolymer materials are often subject to aging processes caused by residual reactive species left in the material after recording or by stresses built up in the material during recording. Erasure may occur because of residual thermal diffusion of the molecules that record the hologram. Index modulation in photorefractives results from a space charge that is built up by the optical excitation and migration of mobile charge carriers. Stability in the dark depends on the trapping of these carriers with trap energies that are not thermally accessible at room temperature.

• Summary of polymer work

Polymer materials are important candidates for holographic storage media. They promise to be inexpensive to manufacture while offering a wide variety of possible recording mechanisms and materials systems. The opportunity for fruitful development of polymer holographic media is thus very broad, and a variety of approaches to using organic materials for holography have been pursued, including organic photorefractive materials, triplet-sensitized photochromic systems, photo-addressable polymers, and materials which produce index modulation via material diffusion. Of the latter class, PQ/PMMA is a polymer glass in which a photoreaction binds the phenanthrenequinone (PQ) chromophore to the PMMA. During a thermal treatment, typically lasting about 24 hours, unbound PQ diffuses, and the resulting concentration gradients are frozen in place by a final uniform illumination that binds the remaining unreacted chromophore to the PMMA backbone, leading to a fixed hologram. This material has the excellent optical quality of the PMMA matrix, it is available in reasonable thickness, and its sensitivity, while somewhat low, is reasonably good. However, the current need for lengthy thermal treatment makes it unacceptable for most storage applications.

The diffusion-driven photopolymer systems offer very high sensitivity and need no such post-exposure processing. The basic mechanism is a photosensitized polymerization, coupled with diffusion of monomer and other components of the material formulation under influence of the resulting concentration gradients. The medium is usually partially prepolymerized to produce a gel-like matrix, allowing rapid diffusion at room temperature. Refractive index modulation and recording of holograms result from both the density change and the difference in polarizability of the polymerized material.

The magnitude of this refractive index modulation can be very high, resulting in a high dynamic range. For simple plane-wave holograms, an M# as high as 42 has been observed. For digital data holograms, the contrast of the interference pattern between object and reference beams is lower than in the plane-wave case, and the recording conditions do not produce as large an index modulation. Even so, the M# observed for digital holograms on the PRISM materials tester is around 1.5, one of the highest yet observed; this value can undoubtedly be improved by optimization of the recording conditions.

The recording mechanism for photopolymers also leads to some disadvantages, including the shrinkage of the material with polymerization and the possibility of nonlinear response. Both of these distort the reconstructed holograms and thus cause errors in decoding the digital data. For some photopolymers, significant advances have been made toward eliminating these undesired properties; for example, shrinkage has been reduced to less than 0.1% while sufficient useful dynamic range for recording of data has been retained. There are additional problems in increasing the thickness of these materials to the millimeter scale that is desirable for holography, and even then the Bragg angle selectivity is not sufficient to allow enough holograms to be written in a common volume to achieve high data density.

However, through the use of nonselective multiplexing methods, it is possible to increase the density to a competitive level. One of these methods, known as peristrophic multiplexing, involves the rotation of the medium about an axis normal to its plane, such that the reconstructed hologram image rotates away from the detector, allowing another hologram to be written and read. We have recently demonstrated the recording and readout, with very low error rate, of 70 holograms of 256 Kb each on the PRISM tester, using a combination of Bragg-angle and peristrophic multiplexing.

Photopolymer materials have undergone rapid development and show great potential as write-once holographic media. Because of this rapid development, there is relatively little research addressing the issue of long-term data integrity and stability after recording. Work in this area is ongoing.

Another class of organic materials undergoing rapid development is the photo-addressable polymer systems. These systems incorporate azo-dye chromophores that are highly optically anisotropic and that undergo optically induced reorientation. Thus, optical irradiation produces a large refractive index change through the birefringence induced by this reorientation process. Incorporating the chromophores into a polymer matrix containing liquid crystal components can stabilize the index change. At this point, these materials lack a convenient means of desensitization once the data have been written, so that they do not saturate and overwrite the holograms during readout. However, the index change available via this mechanism is very large; a recording medium of this type could have very high dynamic range, and thus the potential for high data storage density, and perhaps be reversible, thus enabling rewritable storage.

The best of the photopolymers are promising as storage media for WORM data storage. The photorefractive crystals have traditionally been the favorite candidates for reversible, rewritable storage; recent work on two-color recording has shown the way to a possible solution of the volatility of reversible media during readout. The following section describes this concept.

• Two-color or photon-gated holography

Two main schemes for providing nondestructive readout have been proposed, both in lithium niobate, although the concepts are applicable to a broader range of materials. The first was thermal fixing, in which a copy of the stored index gratings is made by thermally activating proton diffusion, creating an optically stable complementary proton grating. Because of the long times required for thermal fixing and the need to fix large blocks of data at a time, thermally fixed media somewhat resemble reusable WORM materials. Another class of fixing process uses two wavelengths of light. One approach uses two different wavelengths of light for recording and reading, but for storage applications this suffers from increased cross talk and restrictions on the spatial frequencies that can be recorded.

The most promising two-color scheme is “photon-gated” recording in photorefractive materials, in which charge generation occurs via a two-step process. Coherent object and reference beams at a wavelength λ1 record information in the presence of gating light at a wavelength λ2. The gating light can be incoherent or broadband, such as a white-light source or LED. Reading is done at λ1 in the absence of gating light. Depending on the specific implementation, either the gating light acts to sensitize the material, in which case it is desirable for the sensitivity to decay after the writing cycle, or the gating light ionizes centers in which a temporary grating can be written at the wavelength λ1.

Reduced stoichiometric lithium niobate shows both one-color sensitivity in the blue-green spectral region and two-color sensitivity for writing in the near IR and gating with blue-green light. From this it can be seen that the gating light also produces erasure. This is a consequence of the broad spectral features of reduced or Fe-doped lithium niobate. Considerable progress is envisaged if a better separation of gating and erasing functions can be achieved by storing information in deeper traps and/or using wider-band gap materials. Figure 19 compares one-color and two-color writing in a sample of reduced, near-stoichiometric lithium niobate to illustrate the nondestructive readout that can be achieved. The gating ratio in this case was in excess of 5000.



Conventionally, lithium niobate is grown in the congruent melting composition, expressed by the quantity cLi = [Li]/([Li] + [Nb]) = 48.5%, because the identical compositions of the melt and the crystal promote high optical quality and large boules. Crystals of nominally undoped lithium niobate grown with a stoichiometry of cLi = 49.7% (SLN) by a special double-crucible technique were compared with crystals of the congruent composition (CLN). Strong differences were observed, as shown in Table 2. Materials were evaluated in a plane-wave geometry in which two collimated 852-nm beams from a single-frequency diode laser were incident on the sample at an external crossing angle of 20 degrees. Gating light was provided either by an Ar+ laser at 488 nm or by several GaN LEDs. Further details of the experimental setup were recently published.


Table 2 Summary of data and comparison of two-color and one-color results, for stoichiometric (SLN) and congruent (CLN) lithium niobate.

Material          | Recording scheme      | Fe concentration (ppm) | 10³ Sη2 (incident) (cm²/J) | 10³ Sη1 (absorbed) (cm²/J) | M#/cm** | Gating ratio @ 852 nm
------------------|-----------------------|------------------------|----------------------------|----------------------------|---------|----------------------
Reduced SLN       | Two-color* 852 + 488  | 1.0                    | 8                          | 160                        | 0.8     | 1600
Reduced SLN + Fe  | Two-color* 852 + 488  | 100                    | 9                          | 150                        | 0.5     | 10,000
CLN               | Two-color* 852 + 488  | residual               | 0.02                       | >20                        | 0.05    |
Reduced CLN + Fe  | One-color 488 nm      | 200                    | 100                        | 170                        | 24      | N/A

*Iw = 4 W/cm², 852 nm; Ig = 1 W/cm², 488 nm; Λ = 6 µm; E parallel to c-axis.
**For plane-wave, small-angle geometry.

Reduction of lithium niobate (heat treatment in an oxygen-poor atmosphere) induces a broad visible absorption band. This band is attributed primarily to absorption by a bipolaron consisting of an electron trapped on a regular Nb site and another trapped at a NbLi antisite, together with a strong lattice distortion. In addition, there is some contribution to the band from residual impurities such as Fe2+. Irradiating with blue-green light is the gating or sensitizing step, which produces a transient absorption around 1.6 eV. This absorption is assigned to a small polaron, an electron trapped at NbLi, produced by dissociation of the bipolaron, and it is responsible for the sensitivity at 852 nm.

As we have seen, the most important photorefractive properties for two-color holographic data storage are the gating ratio (a measure of the degree of nonvolatility), the sensitivity, the M# or dynamic range, the dark decay, and the optical quality. Table 2 shows most of these properties for stoichiometric and congruent compositions, compared with the behavior of conventional one-color Fe-doped lithium niobate. Photorefractive sensitivity for two-color recording in lithium niobate is linear in the gating light intensity, Ig, only at low values of Ig, because of competition between gating and erasing. Hence, the sensitivity in terms of incident intensities, Sη2, is defined similarly to that for one-color processes [see Equation (2)], but for a fixed and reasonably low value of Ig = 1 W/cm². The sensitivity in terms of absorbed power, Sη1, is obtained by dividing Sη2 by the fraction of the writing light absorbed at the writing wavelength. In terms of this sensitivity, all samples studied, including the single-photon Fe-doped material written at 488 nm, are almost equally sensitive. This suggests that the sensitivity is determined by the amount of light that can be absorbed at the writing wavelength. So far, the maximum absorption of writing light that we have found in reduced SLN is 6% for Ig = 1 W/cm².

Summarizing the results of Table 2, the sensitivity gains for two-color recording in reduced, nearly stoichiometric lithium niobate with respect to the congruent material are 15× for the increased stoichiometry and 20× for the degree of reduction. In addition, lowering the gating wavelength from 520 nm to 400 nm gains a further factor of 10, and cooling from 20°C to 0°C a factor of 5.

There is an interesting difference in the behavior of one- and two-color materials with regard to dynamic range. In a one-color material, the M# is proportional to the modulation index or fringe visibility of the optical interference pattern, m = 2(I1·I2)½/(I1 + I2). However, in a two-color material, the writing light (I1 + I2) does not erase the hologram, and the M# is proportional to (I1·I2)½. As a result, for object and reference beams of equal intensity, the M# is proportional to the writing intensity. While this provides a general way of increasing the dynamic range in a two-color material, the writing power required to achieve a substantial increase in M# becomes rather high in the present material system.

Instead of amplifying the role of the intrinsic shallow levels through stoichiometry, an alternative scheme for implementing two-color holography in lithium niobate is the introduction of two impurity dopants. One trap, such as Mn, serves as the deep trap from which gating occurs, while a shallower trap, such as Fe, provides the intermediate level for gated recording. While this scheme provides more opportunities for tuning through the choice of dopants, in general it is difficult in LiNbO3 to separate the two absorption bands well enough to provide high gating ratios and thus truly nonvolatile storage. In addition, while the M# improves monotonically with writing intensity for stoichiometric lithium niobate, with the two-trap method the M# is maximized at a particular writing intensity, creating an undesirable tradeoff between recording rate and dynamic range.

Two-color, photon-gated holography provides a promising solution to the long-standing problem of destructive readout in read/write digital holographic storage. In lithium niobate, optimization of the sensitivity requires control over stoichiometry (or doping), degree of reduction, temperature, gating wavelength, and gating intensity. Two-color materials differ fundamentally from one-color materials in that the dynamic range or M# can be increased by using a higher writing intensity, and the sensitivity can be increased with a higher gating intensity. Another route to increasing the M# would be to find a material which exhibits a two-color erase process. Substantial progress has been made in recent years in the field of two-color holography, and further progress can be expected on this complex and challenging problem.

6. Conclusion:

Holographic data storage has several characteristics that are unlike those of any other existing storage technologies. Most exciting, of course, is the potential for data densities and data transfer rates exceeding those of magnetic data storage. In addition, as in all other optical data storage methods, the density increases rapidly with decreasing laser wavelength. In contrast to surface storage techniques such as CD-ROM, where the density is inversely proportional to the square of the wavelength, holography is a volumetric technique, making its density proportional to one over the third power of the wavelength. In principle, laser beams can be moved with no mechanical components, allowing access times of the order of 10 µs, faster than any conventional disk drive will ever be able to randomly access data. As in other optical recording schemes, and in contrast to magnetic recording, the distances between the “head” and the media are very large, and media can be easily removable. In addition, holographic data storage has the capability of rapid parallel search through the stored data via associative retrieval.

On the other hand, holographic data storage currently suffers from the relatively high component and integration costs faced by any emerging technology. In contrast, magnetic hard drives, also known as direct access storage devices (DASD), are well established, with a broad knowledge base, infrastructure, and market acceptance. Nevertheless, four scenarios are conceivable in which the unique combination of technical characteristics offered by holographic data storage could be brought to bear to overcome the barriers that face any new storage technology.

