FLUID FLOW VISUALIZATION
The flow of air cannot be seen by the naked eye. The flow of water can be seen, but not its streamlines or velocity distribution. The science of rendering such visually inaccessible fluid behavior as image information is called 'flow visualization', and it is extremely useful for clarifying fluid phenomena. The saying 'seeing is believing' aptly expresses its importance.
This report presents an overview of techniques for visualization of fluid flow data.
The popular techniques Temperature Sensitive Paint, Pressure Sensitive Paint, Tuft Method, Hydrogen Bubble, Optical Methods, and Particle Image Velocimetry are explained in detail. The figures for the various techniques are presented at the end of this report.
This report concentrates mainly on experimental fluid flow visualization; nevertheless, important computer-aided visualization methods, such as particle image velocimetry, are also summarized. A summary of all the important techniques is presented at the end of the report. An introduction to computer-graphics flow visualization is included so that the reader can gain a basic idea of the techniques of graphical visualization.
Purposes and Problems of Flow Visualization
Flow visualization has probably existed as long as fluid flow research itself. Until recently, experimental flow visualization has been the main visualization aid in fluid flow research. Experimental flow visualization techniques are applied for several reasons:
• To get an impression of fluid flow around a scale model of a real object;
• As a source of inspiration for the development of new and better theories of fluid flow;
• To verify a new theory or model.
Though used extensively, these methods suffer from some problems. A fluid flow is often affected by the experimental technique itself, and not all fluid flow phenomena or relevant parameters can be visualized with experimental techniques. Also, the construction of small-scale physical models and experimental equipment such as wind tunnels is expensive, and experiments are time-consuming.
Recently a new type of visualization has emerged: computer-aided visualization. The increase of computational power has led to an increasing use of computers for numerical simulations. In the area of fluid dynamics, computers are extensively used to calculate velocity fields and other flow quantities, using numerical techniques to solve the governing Navier-Stokes equations. This has led to the emergence of Computational Fluid Dynamics (CFD) as a new field of research and practice.
To analyze the results of the complex calculations, computer visualization techniques are necessary. Humans are capable of understanding much more information when it is shown visually, rather than numerically. By using the computer not only for calculating the numerical data, but also for visualizing these data in an understandable way, the benefits of the increasing computational power are much greater.
The visualization of fluid flow simulation data may have several different purposes. One purpose is the verification of theoretical models in fundamental research. When a flow phenomenon is described by a model, this flow model should be compared with the 'real' fluid flow. The accuracy of the model can be verified by calculation and visualization of a flow with the model, and comparison of the results with experimental results. If the numerical results and the experimental flow are visualized in the same way, a qualitative verification by visual inspection can be very effective. Research in numerical methods for solving the flow equations can also be supported, both by visualizing the solutions found and by visualizing intermediate results during the iterative solution process.
Another purpose of fluid flow visualization is the analysis and evaluation of a design. For the design of a car, an aircraft, a harbor, or any other object that is functionally related with fluid flow, calculation and visualization of the fluid flow phenomena can be a powerful tool in design optimization and evaluation. In this type of applied research, communication of flow analysis results to others, including non-specialists, is important in the decision making process.
In practice, often both experimental and computer-aided visualization will be applied. Fluid flow visualization using computer graphics will be inspired by experimental visualization. Following the development of 3D flow solution techniques, there is an especially urgent need for visualization of 3D flow patterns. This presents many interesting but still unsolved problems to computer graphics research. Flow data are different in many respects from the objects and surfaces traditionally displayed by 3D computer graphics. New techniques are emerging for generating informative images of flow patterns; also, techniques are being developed to transform the flow visualization problem into the display of traditional graphics primitives.
Experimental Flow Visualization
1. Pressure Sensitive Paint (PSP) and Temperature Sensitive Paint (TSP)
The use of luminescent molecular probes for measuring surface temperature and pressure on wind tunnel models and flight vehicles offers the promise of enhanced spatial resolution and lower costs compared to traditional techniques. These new sensors are called temperature-sensitive paint (TSP) and pressure-sensitive paint (PSP).
Traditionally, arrays of thermocouples and pressure taps have been used to obtain surface temperature and pressure distributions. These techniques can be very labor-intensive and model/flight vehicle preparation costs are high when detailed maps of temperature and pressure are desired. Further, the spatial resolution is limited by the number of instrumentation locations chosen. By comparison, the TSP and PSP techniques provide a way to obtain simple, inexpensive, full-field measurements of temperature and pressure with much higher spatial resolution. Both TSP and PSP incorporate luminescent molecules in a paint which can be applied to any aerodynamic model surface. Figure 1 shows a schematic of a paint layer incorporating a luminescent molecule.
The paint layer is composed of luminescent molecules and a polymer binder material. The resulting 'paint' can be applied to a surface using a brush or sprayer. As the paint dries, the solvent evaporates and leaves behind a polymer matrix with luminescent molecules embedded in it. Light of the proper wavelength to excite the luminescent molecules in the paint is directed at the model, and luminescent light of a longer wavelength is emitted by the molecules. Using the proper filters, the excitation light and luminescent emission light can be separated, and the intensity of the luminescent light can be determined using a photodetector. Through the photo-physical processes known as thermal and oxygen quenching, the luminescent intensity of the paint emission is related to temperature or pressure. Hence, from the detected luminescent intensity, temperature and pressure can be determined. The polymer binder is an important ingredient of a luminescent paint, used to adhere the paint to the surface of interest. In some cases, the polymer matrix is a passive anchor; in other cases, however, the polymer may significantly affect the photo-physical behavior of the paint through a complicated interaction between the luminescent molecules and the macro-molecules of the polymer. A good polymer binder should be robust enough to sustain skin friction and other forces on the surface of an aerodynamic model. It must also be easy to apply and repair as a smooth, thin film.
For TSP, many commercially available resins and epoxies can be chosen to serve as polymer binders, provided they are not oxygen permeable and do not degrade the activity of the luminophore molecules. In contrast, a good polymer binder for a PSP must have high oxygen permeability besides being robust and easy to apply.
The CCD camera system for luminescent paints is the most commonly used in aerodynamic testing. A schematic of this system is shown in Figure 2. The luminescent paint (TSP or PSP) is coated on the surface of the model. The paint is excited to luminescence by the illumination source, such as a lamp or a laser. The luminescent intensity image is filtered optically to eliminate the illuminating light and then captured by a CCD camera and transferred to a computer with a frame-grabber board for image processing. Both a wind-on image (at the temperature and pressure to be determined) and a wind-off image (at a known constant temperature and pressure) are obtained. The ratio between the wind-on and wind-off images is taken after the dark-current level image is subtracted from both images, yielding a relative luminescent intensity image. Using the calibration relations, the surface temperature and pressure distributions can be computed from the relative luminescent intensity image.
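The ratio procedure described above can be sketched in a few lines. This is a minimal illustration, not a facility's actual pipeline: the Stern-Volmer form used for the pressure calibration is the relation commonly quoted for PSP, and the coefficients A and B here are purely hypothetical placeholder values that in practice come from paint calibration.

```python
import numpy as np

def ratio_image(wind_on, wind_off, dark):
    """Relative luminescent intensity: dark-corrected wind-off over wind-on."""
    on = wind_on.astype(float) - dark
    off = wind_off.astype(float) - dark
    return off / np.maximum(on, 1e-9)  # guard against division by zero

def pressure_from_ratio(ratio, p_ref, A=0.2, B=0.8):
    """Invert the commonly used Stern-Volmer form I_off/I_on = A + B*(p/p_ref).

    A and B are hypothetical calibration coefficients for illustration only.
    """
    return p_ref * (ratio - A) / B

# Toy 2x2 "images": uniform wind-off frame, wind-on frame with two darker
# (higher-pressure) pixels, and a constant dark-current frame.
dark = np.full((2, 2), 10.0)
wind_off = np.full((2, 2), 110.0)
wind_on = np.array([[110.0, 60.0], [110.0, 35.0]])

r = ratio_image(wind_on, wind_off, dark)
p = pressure_from_ratio(r, p_ref=101.3)  # kPa; ratio of 1 recovers p_ref
```

A ratio of exactly 1 (no intensity change) maps back to the reference pressure, while dimmer wind-on pixels, being more strongly oxygen-quenched, map to higher pressure.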
TSP has also been utilized as an approach to flow transition detection. Since convective heat transfer is much higher in turbulent flow than in laminar flow, TSP can visualize the surface temperature difference between turbulent and laminar regions. In low-speed wind tunnel tests, the model is typically heated or cooled to enhance the temperature variation across the transition line.
The PSP/TSP technique provides a promising tool for measuring surface pressure distributions on a high-speed rotating blade at high spatial resolution. Instrumentation is particularly difficult in the rotating environment, and pressure taps weaken the structure of the rotating blade. Recently, a test was performed to measure the chordwise pressure distributions on the rotor blades of a high-speed axial-flow compressor. TSP (Ru(bpy)-Shellac) and PSP (Ru(ph2-phen) in GE RTV 118) were applied to alternating blades. The TSP provided the temperature distributions on the blades for temperature correction of the PSP results. A scanning laser system was used for excitation and detection of luminescence. Both the TSP and PSP were excited with an argon laser, and luminescence was detected with a Hamamatsu PMT. The same system was used on an Allied Signal F109 gas turbine engine, giving the suction-surface pressure map at 14,000 rpm shown in Figure 3.
Figure 3: Fan blade pressure distribution at 14,000 rpm
Characteristics of PSP:
As mentioned previously, PSP simply consists of a luminescent molecule suspended in some type of oxygen-permeable binder. Currently, the majority of these binders are some form of silicone polymer. The vast majority of PSP formulations to date come in a liquid form that is suitable for use with normal spray-painting equipment and methods.
Typically, in its simplest application, PSP is the topmost layer of a multilayer coating on a model surface. The PSP is usually applied over a white undercoat, which reflects a large portion of the light incident upon it and thereby amplifies not only the excitation illumination but the emitted luminescence as well.
As previously mentioned, pressure sensitive paints are used to measure surface pressures. The conventional methods of measuring these pressures are to apply pressure taps or transducers to a model, but these approaches have some significant disadvantages.
First of all, taps and transducers only allow measurements at discrete points on the model surface. The surface pressures at other locations on the model can only be interpolated from the known points. Another disadvantage is that taps and transducers are intrusive to the flow. Measurements cannot be taken downstream of other taps or transducers, since the flow is altered once it passes over the upstream disturbances. Finally, taps and transducers are time-consuming and expensive to use. Typical models used for determining surface loads in aircraft design cost $500,000 to $1 million, with approximately 30% of that cost going towards the pressure taps and their installation.
A relatively new method of surface pressure measurement utilizes pressure sensitive paint, or PSP. Pressure sensitive paint has numerous advantages over the more conventional pressure taps and transducers. The most obvious is that PSP is a field measurement, allowing for a surface pressure determination over the entire model, not just at discrete points. Hence, PSP provides a much greater spatial resolution than pressure taps, and disturbances in the flow are immediately observable.
PSP also has the advantage of being a non-intrusive technique. Use of PSP, for the most part, does not affect the flow around the model, allowing its use over the entire model surface. The use of PSP eliminates the need for a large number of pressure taps, which leads to more than one benefit. Since pressure taps do not need to be installed, models can be constructed in less time and with less money than before. Also, since holes do not need to be drilled in the model for the installation of taps, the model strength is increased, and higher Reynolds numbers can be obtained. The PSP method not only reduces the cost of model construction, but also the cost of the instrumentation needed for data collection: the equipment needed for PSP costs less than pressure taps and can easily be reused for numerous models.
In aircraft design, PSP has the potential to save both time and money. The continuous data distribution on the model provided by PSP can easily be integrated over specific components, which can provide detailed surface loads. Since a model for use with the PSP technique is faster to construct, this allows for load data to be known much earlier in the design process.
Unfortunately, PSP is not without its undesirable characteristics. One of these characteristics is that the response of the luminescent molecules in the PSP coating degrades with time of exposure to the excitation illumination. This degradation occurs because of a photochemical reaction that occurs when the molecules are excited. Eventually, this degradation of the molecules determines the useful life of the PSP coating. This characteristic becomes more important for larger models, as the cost and time of PSP reapplication becomes a significant factor.
A second undesirable characteristic of PSP is that the emission intensity is affected by the local temperature. This behavior is due to the effect temperature has on the energy state of the luminescent molecules, and the oxygen permeability of the binder. This temperature dependence becomes even more significant in compressible flow tests, where the recovery temperature over the model surface is not uniform.
As seen below, the PSP experimental setup is composed of a number of separate elements. The specifications of each element are dependent upon the test conditions, objectives, and budget.
Typical PSP experimental setup
The illumination element ("light source") of the setup is used to excite the luminescent molecules in the PSP coating. Since the intensity of the emitted illumination is proportional to the excitation illumination, the source of illumination must be of sufficient power in the absorption spectrum of the PSP coating, and also have a stable output over time. For complex models with numerous surfaces, multiple illumination elements are often needed to achieve an adequate coverage of the model surface. Some examples of illumination elements are lasers, continuous and flash arc lamps, and simple incandescent lamps.
The imaging element ("camera") used in the experimental setup is heavily dependent upon the required results. In most cases, a good spatial resolution of the pressure distribution is required. Imaging elements that can provide a good spatial resolution include conventional still photography, low-light video cameras, or scientific grade CCD cameras. In most PSP applications, the electronic CCD cameras are the preferred imaging element due to their good spatial resolution and capability to reduce the data they acquire in real time. CCD cameras can be divided into two groups, conventional black and white video cameras and scientific grade CCD digital cameras.
Conventional black and white video cameras are attractive mainly due to their low cost. Typical cameras deliver an 8-bit intensity resolution over a 640 X 480 pixel spatial resolution. Even though conventional black and white video cameras are not precision scientific instruments, when coupled with a PC image processor, the results obtained are more than acceptable for qualitative analysis, and are potentially acceptable for quantitative analysis in certain conditions.
Scientific grade cooled CCD digital cameras, on the other hand, are precision scientific instruments that provide high-precision measurements, at the price of an increased cost. Typical cameras of this type can exhibit 16-bit intensity resolution and spatial resolution up to 2048 X 2048 pixels. For many PSP applications, the high resolution provided by these cameras is mandatory.
Images taken of pressures on an AIM-54 Phoenix missile separating from an F-14 fighter
In order to avoid erroneous illumination readings, it is necessary that the illumination element output only in the absorption spectrum, while the imaging element records only the emission spectrum. When lasers are used for excitation purposes, this is not an issue, as a laser produces light at only one wavelength. Most excitation sources, however, produce light in a wide spectrum. In order to prevent the excitation source spectrum from overlapping the emission spectrum, optical filters are placed over both the illumination element and the imaging element. This constraint also makes it necessary to conduct all PSP testing in a darkened test section; otherwise ambient light may contaminate the readings.
Data Acquisition & Post Processing:
The data acquisition and post-processing in most PSP applications are done in a modular fashion. Initially the camera and computer acquire images for wind-on and wind-off conditions. These images can then be corrected and processed as necessary, either on the same or a different machine. This modular approach has the benefit that the processing for small-scale tests can easily be done with common software running on PCs. In larger-scale facilities, however, much more computing power is needed, as runs can easily produce large amounts of data that need to be processed. This leads to the requirement of high-power graphics workstations and high-capacity storage facilities. It is also important to note that false color is typically added to the images in the post-processing phase in order to facilitate flow visualization (PSP images are monochromatic).
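The false-coloring step can be illustrated with a minimal sketch. The blue-to-red mapping below is an arbitrary illustrative choice; real post-processing software would apply a calibrated, perceptually designed colormap.

```python
import numpy as np

def false_color(gray):
    """Map a normalized monochrome image (values 0..1) to a simple
    blue -> green -> red scale, mimicking the false coloring applied to
    monochromatic PSP images. Returns an (H, W, 3) RGB array in 0..1.
    """
    g = np.clip(gray, 0.0, 1.0)
    rgb = np.empty(g.shape + (3,))
    rgb[..., 0] = g                              # red grows with intensity
    rgb[..., 1] = 1.0 - np.abs(2.0 * g - 1.0)    # green peaks at mid-scale
    rgb[..., 2] = 1.0 - g                        # blue fades with intensity
    return rgb

# A 1x3 test strip: low, medium, and high intensity pixels
img = np.array([[0.0, 0.5, 1.0]])
colored = false_color(img)  # blue, green, and red pixels respectively
```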
2. Evaporation Technique
This method makes use of the contrast obtained on account of the unequal rates of evaporation of a liquid film in the laminar and turbulent regions. A film of some volatile oil is applied on the surface of the model prior to starting the flow. When air flows over this surface, the oil film evaporates faster in the turbulent region than in the laminar region. A clearer contrast is obtained by using black paint on the surface. This method can easily be employed for aerofoil blade surfaces in wind tunnels.
3. Smoke Method
Dense smoke introduced into a flow field by a smoke generator can make the streamline pattern visible. Smoke is generally injected into the flow through an array of nozzles or holes. Kerosene oil can conveniently be used in the smoke generator. The oil is heated to its boiling point by an electric coil; the smoke is formed by introducing the vapors into the air stream. Smoke can also be produced by many other methods. For best results, smoke should be light, non-poisonous and free of deposits.
4. Hydrogen Bubbles
A very easy and effective method to visualize flow fields is the electrolytic generation of hydrogen bubbles with a platinum cathode (diameter about 50 µm) in the model/flow field. Typically, depending on cathode size, the bubbles are very small, approximately 0.1 mm in diameter, and are therefore very responsive, such that they can completely trace the flow over a body or a complex flow field. Light-sheet illumination produces internal reflection within the bubbles and hence visualization of the flow field. This method has general advantages over other methods because there is no contamination of the working fluid and it is very convenient to use, i.e. the bubbles can be switched on and off electrically. Moreover, the cathode can be sized to produce bubbles over as much or as little of the model as necessary.
5. Optical Methods
Optical methods for studying a flow field are valuable and widely used techniques. The refractive index of the medium (flow field) and the velocity of light through it are functions of the density field. For a given medium and wavelength of light, the refractive index is a function of density.
For compressible flows this can be approximately expressed as
n = 1 + β (ρ/ρ_ref)
The basis of all the optical methods of study of compressible flow is the variation of the density to a lesser or greater degree.
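The density-to-refractive-index relation above can be made concrete with a short sketch. The coefficient value used here (β ≈ 2.92e-4 for air at visible wavelengths) and the sea-level reference density are assumed standard values, not figures from this report.

```python
def refractive_index(rho, rho_ref=1.225, beta=0.000292):
    """Gladstone-Dale-type relation in the form used above:
    n = 1 + beta * (rho / rho_ref).

    rho_ref: sea-level air density in kg/m^3 (assumed).
    beta: dimensionless constant for air at visible wavelengths (assumed).
    """
    return 1.0 + beta * (rho / rho_ref)

n_sea_level = refractive_index(1.225)  # n barely exceeds 1 for air
n_altitude = refractive_index(0.736)   # thinner air at ~5 km: n closer to 1
```

The tiny difference between these two values is exactly what shadowgraph, interferometer, and Schlieren systems exploit: even small density changes bend or phase-shift light measurably over the width of a test section.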
The three optical techniques described here are:
(i) the shadowgraph,
(ii) the interferometer, and
(iii) the Schlieren system.
Each of these techniques is based on either of the variations shown in the figure.
If the variations of refractive index in the flow field are measured, the corresponding density variations can be determined from them. Density along with pressure can yield the values of temperature, velocity of sound, and Mach number.
By using these techniques the flow field can either be observed on a screen or its permanent record is obtained on a photographic plate. The great advantage of optical methods over other methods is that the instruments are not inserted into the flow; thus the flow is undisturbed in this method of flow investigation.
Though the working of these methods is described here with reference to a model in the wind tunnel test section, they can no doubt be used in a variety of other situations. The flow direction in the test section is considered perpendicular (x-direction) to the plane of the paper, while the light beam is parallel to the span (z-direction) of the model, an aerofoil or a wedge.
6. Shadow Technique
The arrangement adopted in this technique is shown in Fig. and is often referred to as a shadowgraph. The collimating lens provides a collimated beam of light from the source. This beam passes through the transparent walls of the test section. The shadow picture of the flow is directly obtained on a white screen placed on the other side of the test section. The degree of brightness on the screen is proportional to the second derivative of the density, ∂²ρ/∂x²; the varying degree of brightness is thus a measure of the variations in the density field.
This technique gives clear pictures of the density variation in flows with shocks and combustion. The method is convenient because the required equipment is inexpensive and easy to operate.
Shadowgraph visualizes the density distribution in the flow around an axisymmetric model in a supersonic flow (Courtesy High-Speed Laboratory, Dept. of Aerospace Engineering, Delft University of Technology)
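The brightness relation of the shadowgraph can be illustrated numerically. This is a minimal sketch assuming an idealized one-dimensional density step, such as across a shock, with the second derivative approximated by a central finite difference and the optical proportionality constant omitted.

```python
import numpy as np

def shadowgraph_contrast(rho, dx=1.0):
    """Screen brightness contrast in a shadowgraph is proportional to the
    second spatial derivative of density, d^2(rho)/dx^2, estimated here
    with a central finite difference (proportionality constant omitted).
    """
    d2 = np.zeros_like(rho)
    d2[1:-1] = (rho[2:] - 2.0 * rho[1:-1] + rho[:-2]) / dx**2
    return d2

# Idealized density step, e.g. across a shock: uniform regions produce no
# contrast, while the jump produces an adjacent bright/dark pair on screen.
x = np.linspace(0.0, 1.0, 101)
rho = np.where(x < 0.5, 1.0, 1.8)
contrast = shadowgraph_contrast(rho, dx=x[1] - x[0])
```

This is why the shadowgraph excels at flows with shocks: only regions where the density gradient itself changes rapidly show up, while smooth uniform regions stay blank.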
7. Interferometer Technique
In this technique the variation of density in the flow field is directly determined from the interference pattern obtained on the screen or a photographic plate.
The most widely used apparatus based on this technique is the Mach-Zehnder interferometer, which is shown in the figure. It consists of two plane half-silvered mirrors (splitters) A and C, and two fully reflecting mirrors B and D. A parallel beam of light is obtained from the light source through the lens L and the concave mirror M1. The splitter A reflects a part of this beam to the transparent test section; the rays of light from the test section are reflected by the mirror D to a concave mirror M2 through the splitter C.
The part of the beam from M1 which passes through the splitter A is reflected by the mirror B to the reference section; this has transparent walls identical to those of the test section, so these walls act as compensating plates. The rays from the reference section also reach the concave mirror M2 after reflection from the splitter C. Thus the mirror M2 collects the rays coming separately from the test section and the reference section and directs them onto the screen or a photographic plate. After emerging from the splitter C, the two parts of the light beam merge into one single coherent beam before reaching the mirror M2; thus the pattern of illumination on the screen is uniform when there is no flow in the test section, just as in the reference section. When flow is established in the test section, the beam of light passing through its density field will be out of phase with the beam coming through the reference section; in this case the mirror M2 will reflect an interference pattern onto the screen, which represents the variable density pattern in the flow field.
While the interferometer is suitable for quantitative measurement of the density variation in a flow field, it requires expensive equipment which is difficult to operate.
8. Schlieren Technique
In this technique the density gradient (dρ/dx) in the flow field is obtained in terms of the varying degree of brightness on the screen; the degree of brightness, or intensity of illumination, is proportional to the density gradient in the flow field.
The arrangement adopted in the Schlieren technique is shown in Fig. A beam of light is sent through the test section from the light source by a properly oriented concave mirror M1. The beam coming from the test section is reflected onto the screen or a photographic plate through two suitably located concave mirrors M2 and M3. A sharp knife edge is inserted at the focal point of the mirror M2 to intercept about half the light. Thus, in the absence of flow through the test section, the screen is illuminated uniformly by the light escaping the knife edge. But in the presence of flow, the rays of light are deflected differently (as in a prism) on account of the variable density and refractive index in the flow field. Therefore a greater or lesser part of the light beam will now escape the knife edge, giving a varying intensity of illumination on the screen.
9. Laser Techniques
Application of lasers has provided the most powerful and reliable optical method of measuring velocity, direction and turbulence in liquids and gases over a wide range. In this method a laser beam is focused in the flow field (test section) where measurement of velocity, turbulence etc. is required. The scattered light from the minute solid particles in the flow is utilized as a signal for velocity measurement by employing a number of optical and electronic equipment such as special lenses, beam splitters, photo detectors, signal processors, timing devices, data acquisition system and a computer.
Laser is an acronym for Light Amplification by Stimulated Emission of Radiation; a laser is a strong source of monochromatic and coherent light, in which light emitted from one atom of the gas is employed to amplify the original light. Helium-neon lasers are commonly used in the range 0.5-100 milliwatts; they have comparatively lower cost and a higher degree of reliability. Argon-ion lasers are used in higher power ranges, i.e. 5 mW to 15 W. CO2 lasers have a power range between 1 and 100 watts. Higher-power lasers produce a greater noise level in the system, which interferes with the signal.
The laser beam has a very high intensity of light, which can be damaging to the eyes and skin. It can also start spontaneous combustion of inflammable material. Therefore proper precautions must be taken while using a laser system. Solid particles in the flow act as scattering points at the measuring stations. If they are small and have a density close to that of the fluid, their velocity can be taken as equal to the flow velocity; this condition is satisfied to a great extent in liquid flows. In air flows, the small naturally occurring solid particles act as scattering points. Very small particles may not produce a signal of sufficient strength and would be lost in the "system noise"; very large particles will give erroneous results. An artificial seeding plant can also be employed to supply solid particles of the desired size (about one micron).
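Whether a seeding particle faithfully follows the flow can be judged by its Stokes response time, a standard estimate not given in the text above: the smaller the response time relative to the flow's time scales, the more faithfully the particle tracks the fluid. The viscosity and droplet density values below are typical assumed figures.

```python
def particle_response_time(d_p, rho_p, mu=1.8e-5):
    """Stokes response time tau = rho_p * d_p^2 / (18 * mu) of a small
    spherical seeding particle in a fluid of dynamic viscosity mu.

    mu defaults to air at room temperature (assumed, Pa*s).
    A small tau means the particle follows velocity changes faithfully.
    """
    return rho_p * d_p**2 / (18.0 * mu)

# ~1 micron oil droplet (density ~900 kg/m^3, an assumed typical value),
# matching the "about one micron" seeding size recommended above
tau_1um = particle_response_time(1e-6, 900.0)

# A 10 micron particle of the same material responds 100x more slowly,
# since tau scales with diameter squared, and may lag the flow
tau_10um = particle_response_time(1e-5, 900.0)
```

For the 1 µm droplet the response time is a few microseconds, which is why micron-sized seeding works well even in rapidly varying flows.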
Advantages of laser techniques
Lasers have wide applications in measurements in turbomachinery, wind tunnels, water tunnels, combustion studies, heat exchangers and many areas in aerospace and nuclear engineering. Their main advantages are:
• They employ a non-intrusive method which does not disturb the flow.
• They can measure velocities in regions which are inaccessible to other devices, offering a valuable tool for boundary layer measurements.
• No calibration is required; their working is independent of the pressure, temperature and density of the fluid.
• They can be used over a wide range of velocities (0.1-300 m/s); their frequency response is very high.
• Velocity varies linearly with the signal over a wide range.
• They can be easily interfaced with a computer.
Some of the disadvantages include their high cost, complex optical and electronic equipment, and the requirement of well-trained and skilled operators. Installation of a laser system requires considerable preparation. In some cases a seeding plant is also needed, which further adds to the already high cost.
10. Particle Image Velocimetry (PIV)
Particle image velocimetry is usually a planar laser light-sheet technique in which the light sheet is pulsed twice, and images of fine particles lying in the light sheet are recorded on a video camera or a photograph. The displacement of the particle images is measured in the plane of the image and used to determine the displacement of the particles in the flow. The most common way of measuring displacement is to divide the image plane into small interrogation spots and cross-correlate the images from the two time exposures. The spatial displacement that produces the maximum cross-correlation statistically approximates the average displacement of the particles in the interrogation cell. The velocity associated with each interrogation spot is simply the displacement divided by the time between the laser pulses.
If the velocity component perpendicular to the plane is needed, a stereographic system using two lenses can be used. Typically, PIV measures on a 100 x 100 grid with accuracy between 0.2% and 5% of full scale and a spatial resolution of ~1 mm, but special designs allow for larger and smaller values. Framing rates of most PIV cameras are of order 10 Hz, compatible with the pulse rates of Nd:YAG lasers, which is too slow for most cinematic recording. Special systems using rapidly pulsed metal-vapor lasers and fast cinematic cameras or special high-speed video cameras are able to measure up to ~10,000 frames per second. Micro-PIV systems have been constructed to measure velocities in cells as small as a few microns.
Particle Image Velocimetry (PIV) is a whole-flow-field technique providing instantaneous velocity vector measurements in a cross-section of a flow. Two velocity components are measured, but use of a stereoscopic approach permits all three velocity components to be recorded, resulting in instantaneous 3D velocity vectors for the whole area. The use of modern CCD cameras and dedicated computing hardware, results in real-time velocity maps.
• The technique is non-intrusive and measures the velocities of micron-sized particles following the flow.
• Velocities range from zero to supersonic.
• Instantaneous velocity vector maps are obtained in a cross-section of the flow.
• All three components may be obtained with the use of a stereoscopic arrangement.
• With sequences of velocity vector maps, statistics, spatial correlations and other relevant data are available.
Results are similar to those of computational fluid dynamics, e.g. large-eddy simulations, and real-time velocity maps are an invaluable tool for fluid dynamics researchers.
In PIV, the velocity vectors are derived from sub-sections of the target area of the particle-seeded flow by measuring the movement of particles between two light pulses:
The flow is illuminated in the target area with a light sheet. The camera lens images the target area onto the CCD array of a digital camera. The CCD is able to capture each light pulse in separate image frames.
Once a sequence of two light pulses is recorded, the images are divided into small subsections called interrogation areas (IA). The interrogation areas from each image frame, I1 and I2, are cross-correlated with each other, pixel by pixel.
The correlation produces a signal peak, identifying the common particle displacement, X. An accurate measure of the displacement, and thus also of the velocity, is achieved with sub-pixel interpolation.
A velocity vector map over the whole target area is obtained by repeating the cross-correlation for each interrogation area over the two image frames captured by the CCD camera.
The correlation of the two interrogation areas, I1 and I2, results in the particle displacement X, represented by a signal peak in the correlation C(X).
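The cross-correlation step described above can be sketched in a few lines of NumPy. This is a minimal illustration with synthetic data, not production PIV code: the function names are made up, and real systems add windowing, sub-pixel peak fitting and outlier validation.

```python
import numpy as np

def correlate_ia(i1, i2):
    """Cross-correlate two interrogation areas (I1, I2) via FFT and
    return the integer pixel displacement (dx, dy) at the peak."""
    f1 = np.fft.fft2(i1 - i1.mean())
    f2 = np.fft.fft2(i2 - i2.mean())
    c = np.real(np.fft.ifft2(np.conj(f1) * f2))  # correlation plane C(X)
    c = np.fft.fftshift(c)                       # zero displacement at centre
    peak = np.unravel_index(np.argmax(c), c.shape)
    dy, dx = np.array(peak) - np.array(c.shape) // 2
    # A real PIV code would now refine the peak to sub-pixel accuracy,
    # e.g. with a three-point Gaussian fit around the maximum.
    return int(dx), int(dy)

def velocity_from_ia(i1, i2, dt, metres_per_pixel):
    """Velocity = displacement / time between the two light pulses."""
    dx, dy = correlate_ia(i1, i2)
    return dx * metres_per_pixel / dt, dy * metres_per_pixel / dt

# Synthetic check: a random particle pattern shifted by (3, 1) pixels.
rng = np.random.default_rng(0)
i1 = rng.random((32, 32))
i2 = np.roll(np.roll(i1, 3, axis=1), 1, axis=0)
print(correlate_ia(i1, i2))   # -> (3, 1)
```

The FFT-based formulation is the standard efficient way to evaluate the correlation over all candidate displacements at once; the peak location gives the average particle displacement for that interrogation area.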
PIV images are visual: just follow the seeding
Recording both light pulses in the same image frame to track the movements of the particles gives a clear visual sense of the flow structure. In air flows, the seeding particles are typically oil drops in the range 1 µm to 5 µm.
For water applications, the seeding is typically polystyrene, polyamide or hollow glass spheres in the range 5 µm to 100 µm. Any particle that follows the flow satisfactorily and scatters enough light to be captured by the CCD camera can be used.
The number of particles in the flow is of some importance in obtaining a good signal peak in the cross-correlation. As a rule of thumb, 10 to 25 particle images should be seen in each interrogation area.
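The 10-to-25-particle rule of thumb can be checked with a back-of-the-envelope calculation: the expected particle count in one interrogation area is the seeding concentration times the measurement volume (the IA footprint projected into the flow, times the light-sheet thickness). All numerical values below are made-up example values, not from the text.

```python
def particles_per_ia(concentration_mm3, ia_side_px, px_size_mm,
                     magnification, sheet_thickness_mm):
    """Expected number of particle images per interrogation area."""
    # Side of the IA projected back into the flow (object plane), in mm.
    side_flow = ia_side_px * px_size_mm / magnification
    volume = side_flow ** 2 * sheet_thickness_mm   # measurement volume, mm^3
    return concentration_mm3 * volume

n = particles_per_ia(concentration_mm3=5.0,   # particles per mm^3 (assumed)
                     ia_side_px=32, px_size_mm=0.0065,
                     magnification=0.1, sheet_thickness_mm=1.0)
print(round(n, 1))   # -> 21.6, within the 10-25 rule of thumb
```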
Double-pulsed particle images.
When the size of the interrogation area, the magnification of the imaging and the light-sheet thickness are known, the measurement volume can be defined.
Spatial resolution and dynamic range
When setting up a PIV measurement, the side length of the interrogation area, dIA, and the image magnification, s'/s, are balanced against the size of the flow structures to be resolved. One way of expressing this is to require the velocity gradient to be small within the interrogation area.
The highest measurable velocity is constrained by particles travelling further than the size of the interrogation area within the time, t, between pulses. The result is loss of correlation between the two image frames and thus loss of velocity information. As a rule of thumb, the in-plane particle displacement should therefore not exceed about a quarter of the interrogation-area side length.
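The two rules of thumb referred to above are usually written along the following lines. These are the commonly quoted forms from the PIV literature, using the text's notation (dIA, s'/s, t); the 5% and one-quarter thresholds are conventional values and exact conventions vary between sources.

```latex
% Velocity-gradient criterion: displacement differences across one
% interrogation area should stay small relative to its side length:
\frac{s'}{s}\,\frac{|\Delta V|\,t}{d_{IA}} < 5\%

% One-quarter rule: the largest in-plane particle-image displacement
% should not exceed a quarter of the interrogation-area side:
\frac{s'}{s}\,|V|_{\max}\,t < \frac{1}{4}\,d_{IA}
```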
The third velocity component
In normal PIV systems, the third velocity component is "invisible" due to the geometry of the imaging. This third velocity component can be derived by using two cameras in a stereoscopic arrangement.
Experimental set-up for stereoscopic PIV measurements of the flow behind a car model.
11. Computer Graphics Flow Visualization
Experimental flow visualization is a starting point for flow visualization using computer graphics. The process of computer visualization is described in general, and applied to CFD. The heart of the process is the translation of physical to visual variables. Fluid mechanics theory and practice help to identify a set of 'standard' forms of visualization. To prepare the flow data to be cast in visual form, several types of operations may have to be performed on the data.
The Flow Visualization Process
Scientific visualization with computer-generated images can generally be conceived as a three-stage pipeline process. We will use an extended version of this process model here.
• Data generation:
Production of numerical data by measurement or numerical simulations. Flow data can be based on flow measurements, or can be derived from analysis of images obtained with experimental visualization techniques as described earlier, using image processing. Numerical flow simulations often produce velocity fields, sometimes combined with scalar data such as pressure, temperature, or density.
• Data enrichment and enhancement:
Modification or selection of the data, to reduce the amount or improve the information content of the data. Examples are domain transformations, sectioning, thinning, interpolation, sampling, and noise filtering.
• Visualization mapping:
Translation of the physical data to suitable visual primitives and attributes. This is the central part of the process; the conceptual mapping involves the 'design' of a visualization: determining what we want to see, and how to visualize it. Abstract physical quantities are cast into a visual domain of shapes, light, colour, and other optical properties. The actual mapping is carried out by computing derived quantities from the data suitable for direct visualization. For flow visualization, an example of this is the computation of particle paths from a velocity field.
• Rendering: transformation of the mapped data into displayable images. Typical operations here are viewing transformations, lighting calculations, hidden-surface removal, scan conversion, and filtering (anti-aliasing and motion blur).
• Display: showing the rendered images on a screen. A display can be direct output from the rendering process, or be simply achieved by loading a pixel file into a frame buffer; it may also involve other operations such as image file format translation, data (de)compression, and colour map manipulations. For animation, a series of precomputed rendered images may be loaded into the main memory of a workstation and displayed using a simple playback program.
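As a concrete illustration of the first three pipeline stages, the following sketch turns a synthetic 2D velocity field into arrow glyphs for an arrow plot. All names and the shear-flow data are invented for the sketch; rendering and display are left to a graphics library.

```python
import numpy as np

def generate_data(n=8):
    """Data generation: a synthetic 2D velocity field on an n x n grid
    (a simple shear flow standing in for simulation output)."""
    x, y = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n))
    u, v = y.copy(), np.zeros_like(y)   # u grows with y: shear flow
    return x, y, u, v

def enrich(x, y, u, v, stride=2):
    """Data enrichment/enhancement: thin the grid by keeping every
    stride-th point, reducing clutter in the final image."""
    s = (slice(None, None, stride),) * 2
    return x[s], y[s], u[s], v[s]

def map_to_arrows(x, y, u, v, scale=0.1):
    """Visualization mapping: turn each velocity sample into an arrow
    glyph, here a line segment from (x, y) to (x + scale*u, y + scale*v)."""
    starts = np.stack([x.ravel(), y.ravel()], axis=1)
    ends = starts + scale * np.stack([u.ravel(), v.ravel()], axis=1)
    return starts, ends

starts, ends = map_to_arrows(*enrich(*generate_data()))
# Rendering and display would hand these segments to a graphics library
# (e.g. matplotlib's quiver); those stages are omitted here.
print(starts.shape)   # -> (16, 2): 8x8 grid thinned to 4x4 = 16 arrows
```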
The style of visualization using numerically generated data is suggested by the formalism underlying the numerical simulations. The two different analytical formulations of flow, Eulerian and Lagrangian, can be used to distinguish two classes of visualization styles:
• Eulerian: physical quantities are specified at fixed locations in a 3D field. Visualization tends to produce static images of a whole study area. A typical Eulerian visualization is an arrow plot, showing flow-direction arrows at all grid points.
• Lagrangian: physical quantities are linked to small particles moving with the flow through an area, and are given as a function of starting position and time. Visualization often leads to dynamic images (animations) of moving particles, showing only local information in the areas where particles move.
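The Lagrangian computation of particle paths from a velocity field, mentioned above as a typical mapping step, amounts to integrating dx/dt = v(x, t). A minimal sketch, assuming an analytic solid-body-rotation field in place of interpolated simulation data (all names are illustrative):

```python
import numpy as np

def velocity(p, t):
    """Hypothetical steady velocity field: solid-body rotation about the
    origin, v(x, y) = (-y, x), standing in for interpolated CFD data."""
    x, y = p
    return np.array([-y, x])

def particle_path(p0, t0, t1, steps):
    """Integrate dx/dt = v(x, t) with classical fourth-order Runge-Kutta
    and return the sequence of particle positions (the Lagrangian view)."""
    h = (t1 - t0) / steps
    path = [np.asarray(p0, dtype=float)]
    t = t0
    for _ in range(steps):
        p = path[-1]
        k1 = velocity(p, t)
        k2 = velocity(p + 0.5 * h * k1, t + 0.5 * h)
        k3 = velocity(p + 0.5 * h * k2, t + 0.5 * h)
        k4 = velocity(p + h * k3, t + h)
        path.append(p + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4))
        t += h
    return np.array(path)

# A particle seeded at (1, 0) in this field traces the unit circle and
# returns close to its starting point after one period of 2*pi.
path = particle_path((1.0, 0.0), 0.0, 2.0 * np.pi, 200)
```

Animating the resulting positions over time gives the dynamic, local view characteristic of the Lagrangian style, in contrast with the static whole-field arrow plot of the Eulerian style.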
Wake behind an automobile (tuft grid method)
Visualization with dye to study water flow in a river model
(Courtesy Delft Hydraulics)
In the preceding sections, we have reviewed different aspects of flow visualization, mainly experimental visualization methods, and an introduction has been made to computer graphics flow visualization techniques. The connection between experimental and computer-aided flow visualization is now beginning to develop. The current strong demand for new flow visualization techniques, especially for large-scale 3D numerical flow simulations, can only be satisfied by combining the efforts of fluid dynamics specialists, numerical analysts, and computer graphics experts. Additional knowledge will be required from perceptual and cognitive psychology, and artists and designers can also contribute to this effort.
Flow visualization will not be restricted to techniques for giving an intuitively appealing, general impression of flow patterns, but will increasingly focus on more specific physical flow phenomena, such as turbulence, separation and reattachment, shock waves, or free liquid surfaces. Also, purely visual analysis of flow patterns will increasingly be complemented by algorithmic techniques that extract meaningful patterns and structures, which can then be visualized separately.
Before they can be used as reliable research tools, the visualization techniques themselves must also be carefully tested and validated. As we have seen in the previous sections, visualization involves a sequence of many processing steps, in which approximations are frequently used and numerical errors can easily occur.
An important issue following the development of visualization techniques is the design and implementation of flow visualization systems. Research in computer graphics flow visualization is still in its early stages, and 3D flow field visualization in particular is still very much an open problem. At present, this is one of the great challenges of scientific visualization, and it calls for a cooperative effort in the development of new techniques at all stages of the flow visualization process.