

B.1 OVERVIEW: DEVELOPMENT OF A BENCHMARK DATA SET

One of the main goals of this research program was the creation of a benchmark data set for use in future code development, so it is important to describe the wide variety of individual experimental conditions obtained. The benchmark data set must be created in conjunction with the current experimental investigations, and while the requirements of the two efforts are similar, there are distinct differences that need to be noted and understood.

The most notable distinction is that for any experimental research investigation the goals are usually well defined (although the tasks may not be), and the data is analyzed and presented in a format that answers these specific goals. Using the LPV clocking data subset as an example, the data will be presented as a function of Reynolds number while holding corrected speed and pressure ratio constant. Other independent variables may exist, or other researchers may wish to examine different effects. Often in these experimental programs the secondary effects (while present in the data set) are never fully investigated, leaving only the answers to the main question (in this case, clocking as a function of Reynolds number at constant corrected speed and pressure ratio). The intent is to return to these secondary issues, or more likely to have another researcher examine them. However, as time goes on, recreating the details of a particular data set becomes more and more difficult. Usually the main questions regarding the data are very well documented, but the secondary data is not documented nearly as well.

In contrast, a benchmark data set requires that the data be well characterized for a variety of different tasks before it is stored, without the benefit of knowing the main questions up front. This sounds easy enough, until one realizes that many of the nuances of the facility, the acquisition characteristics, and even items such as data presentation have to be thought out in the most general terms. The problem is usually compounded by the fact that the researcher often cannot foresee all the uses for the data being archived, and must spend a great deal of time thinking through the various interconnections in order to provide a convenient way to store the data. To make matters worse, the mining of this data usually occurs many years after the fact, and sometimes the main participants in the original experiment are no longer involved at that later time.

The raw data over the entire experiment is too unwieldy to use as the benchmark set itself; some preliminary analysis and selection is needed to arrive at the subset used for the detailed analysis. In this case, the analysis is done by creating matched time-windows and then processing all the sensors over those windows. These sets of data are then examined through their time-averaged and time-resolved components, and comparing different sensors from different experimental conditions forms the basis of the data subsets for the analysis. One can think of the benchmark data set as a collection of higher-level data (just the time-averaged and time-resolved data over these matched windows).
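To make the matched-window decomposition concrete, the following is a minimal sketch (not the facility's actual processing code) of how a single sensor trace could be split into its time-averaged and time-resolved components over one matched time-window; the function name, sample rate, and window bounds are illustrative assumptions.

```python
import numpy as np

def decompose_window(signal, sample_rate_hz, t_start_s, t_end_s):
    """Split one sensor trace into its time-averaged and time-resolved
    parts over a single matched time-window.

    Illustrative sketch only: assumes a uniformly sampled 1-D array.
    """
    i0 = int(round(t_start_s * sample_rate_hz))
    i1 = int(round(t_end_s * sample_rate_hz))
    window = signal[i0:i1]
    time_average = window.mean()            # scalar average over the window
    time_resolved = window - time_average   # fluctuating (unsteady) component
    return time_average, time_resolved

# Example with synthetic data: a 50 ms window from a 1 MHz trace.
rng = np.random.default_rng(0)
trace = rng.normal(loc=101.3, scale=0.5, size=1_000_000)  # hypothetical pressure trace
avg, unsteady = decompose_window(trace, 1.0e6, 0.200, 0.250)
```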


Of course, modern research programs contain both parts, with some of the data archived as well as reporting done on the specific research goals.

As facilities improve, more and more effort is spent creating benchmark data sets. It is a testament to the advance of the short-duration facility as an experimental device that more and more of the data taken in these facilities is considered worthy of forming benchmark data sets.

This may seem like a non-issue, but recently revisiting data taken at Calspan in the 1990s showed how careful one has to be in storing the data. That data was stored only as a time average over a certain window, so the full time-resolved nature of the data was not preserved. The ability to redo some of the basic analysis with newer techniques at slightly different windows was essentially lost, even though the originally acquired data had the necessary resolution, leaving one of the primary heat-flux data sets of limited usefulness for current code verification.
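One way to avoid this limitation is to archive the windowed time-resolved samples together with the window definition, so that averages can be recomputed later over different windows. The record below is a hypothetical sketch of such a structure, not the format actually used at Calspan or in this program.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ArchivedSensorWindow:
    """Hypothetical archive record that keeps the raw windowed samples,
    so later analyses can re-window or re-average as needed."""
    sensor_id: str
    sample_rate_hz: float
    t_start_s: float
    samples: np.ndarray          # time-resolved samples within the window

    def time_average(self, t0_s=None, t1_s=None):
        """Recompute the time average over any sub-window of the stored data."""
        t_end_s = self.t_start_s + len(self.samples) / self.sample_rate_hz
        t0 = self.t_start_s if t0_s is None else t0_s
        t1 = t_end_s if t1_s is None else t1_s
        i0 = int(round((t0 - self.t_start_s) * self.sample_rate_hz))
        i1 = int(round((t1 - self.t_start_s) * self.sample_rate_hz))
        return float(self.samples[i0:i1].mean())
```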

With these thoughts in mind, the benchmark data set will be formed from the many experimental runs over time-windows that match the engine operating conditions: corrected speed and pressure ratio. The experimental subset used in the analysis and presented in Chapter 5 and Chapter 6 will come from this data set. The benchmark data set will consist of time-averaged data and various representations of the time-resolved data. Correct interpretation of this data requires an understanding of many topics, including: time-scales, time-averaged vs. time-resolved data, the composition of the time-resolved data, the uncertainty bands on the data, the derivation of the time-windows, and error propagation. After these are discussed, the main analysis techniques will be shown, and some of the models used to improve the basic measurements will be described. Armed with this information, the data presented in the following chapters will be easier to understand.
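As an illustration of how time-windows could be matched to the target operating condition, the sketch below keeps only the windows whose corrected speed and pressure ratio fall within a relative tolerance of the target values. The field names and tolerances are assumptions for illustration, not the actual selection criteria used in this program.

```python
from dataclasses import dataclass

@dataclass
class RunWindow:
    run_id: str
    t_start_s: float
    t_end_s: float
    corrected_speed: float    # e.g. fraction of design corrected speed
    pressure_ratio: float

def matched_windows(windows, target_speed, target_pr,
                    speed_tol=0.005, pr_tol=0.01):
    """Select windows whose operating condition matches the target
    within the given relative tolerances (illustrative values)."""
    return [
        w for w in windows
        if abs(w.corrected_speed - target_speed) / target_speed <= speed_tol
        and abs(w.pressure_ratio - target_pr) / target_pr <= pr_tol
    ]

# Example usage with hypothetical values.
candidates = [
    RunWindow("run_012", 0.200, 0.250, 1.002, 3.48),
    RunWindow("run_013", 0.180, 0.230, 0.960, 3.10),
]
selected = matched_windows(candidates, target_speed=1.000, target_pr=3.50)
```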
