
Chapter 9: Embedded Graphics

In an attempt to provide a better user experience, electronic products these days provide a graphical user interface. The complexity of the interface depends on the product and its usage scenario. For instance, consider these devices: a DVD/MP3 player, a mobile phone, and a PDA. A DVD/MP3 player requires some form of primitive interface capable of listing the CD/DVD contents; creating playlists and searching tracks are the most complex operations it must support. Obviously this is not sufficient for a mobile phone, which has more functionality. The most demanding requirements are in the PDA: one should be able to run almost all applications, such as word processors, spreadsheets, and schedulers, on a PDA.

One might have various questions regarding graphics support on embedded Linux:

- What comprises a graphics system? How does it work?
- Can I use Linux desktop graphics on embedded systems as-is?
- What choices are available for graphics on embedded Linux systems?
- Is there a generic solution that can address the entire range of embedded devices requiring graphics (i.e., mobile phone to DVD player)?

This chapter is outlined to answer these questions.

9.1 Graphics System

The graphics system is responsible for:

- Managing the display hardware
- Managing one or more human input interfaces, if necessary
- Providing an abstraction of the underlying display hardware (for use by applications)
- Managing different applications so that they co-exist and share the display and input hardware effectively

Regardless of operating system and platform, a generic graphics system can be conceptualized as module layers, as shown in Figure 9.1. The various layers are:

- Layer 1 is the graphics display hardware and the input hardware, the essential hardware components in any graphics system.
  For example, an ATM kiosk has a touchscreen serving as both its input interface and display hardware, while a DVD player has its video output on the TV and a front-panel LCD as display hardware, with a remote control as the input interface.
- Layer 2 is the driver layer, which interfaces with the operating system. Each operating system has its own interfacing mechanism, and device manufacturers try to provide drivers for all popular operating systems. For example, NVIDIA® graphics cards ship with video drivers for both Linux and Windows.
- Layer 3 is the windowing environment, consisting of a drawing engine responsible for graphics rendering and a font engine responsible for font rendering. Drawing engines provide, for example, line, rectangle, and other geometric shape-drawing functionality.
- Layer 4 is the toolkit layer. A toolkit is built over a particular windowing environment and provides APIs for use by applications. Some toolkits are available over multiple windowing environments and thus provide application portability. Toolkits provide functions to draw complex controls such as buttons, edit boxes, list boxes, and so on.
- The topmost layer, Layer 5, is the graphics application. An application need not always use a toolkit and a windowing environment: with some minimal abstraction or glue layer it might be possible to write an application that interacts directly with the hardware via the driver interface. Also, some applications, such as video players, require an accelerated interface that bypasses the graphics layers and talks to the driver directly; for such cases the graphics system provides special interfacing, such as the well-known DirectX on Windows.

Figure 9.1: Graphics system architecture. (Layer 1: display and input hardware; Layer 2: device drivers; Layer 3: windowing environment; Layer 4: toolkit; Layer 5: graphics applications.)

Figure 9.2 compares these layers across various operating systems.
The chapter progressively discusses each layer in detail with respect to embedded Linux.

9.2 Linux Desktop Graphics: The X Graphics System

The X Window System provides Linux desktop graphics. We use it as a case study to understand the layered architecture of a complete graphics solution. The X system was primarily written for the desktop computer. Desktop graphics cards follow predefined standards (VGA/SVGA), and input devices such as the mouse and keyboard also have driver standards; hence a generic driver handles the display and input hardware. The X system implements a driver interface necessary to interact with PC display hardware. This driver interface isolates the rest of the X system from hardware-specific details.

Figure 9.2: Graphics layers across operating systems. (The figure shows the stacks for Windows CE, Symbian, and embedded Linux: the OS-specific driver layer (Windows video miniport driver, Symbian graphics driver, Linux frame buffer driver), the graphics engine layer (Windows GDI, Symbian graphics engine, Nano-X/Qt-E/GDK-fb), and the toolkit layer (MFC/SDK, Symbian graphics API, FLNX/Qt-E/GTK-fb), with graphics applications on top.)

The windowing environment in X has a client/server model. X applications are clients; they communicate with the server, issuing requests and receiving information from it. The X server controls the display and services requests from clients. Applications (clients) only need to know how to communicate with the server and need not be concerned with the details of rendering graphics on the display device. This communication mechanism (protocol) can work over any interprocess communication mechanism that provides a reliable octet stream; X uses sockets for this purpose, and the result is the X protocol. Because X is based on sockets, it can run over a network and can be used for remote graphics as well. X clients use APIs provided by the X windowing system to render objects on the screen.
These APIs are part of a library, Xlib, which is linked with the client application. The X toolkit architecture is shown in Figure 9.3; it comprises APIs that provide windowing capabilities. Controls such as list boxes, buttons, check boxes, edit boxes, and the like are also windows, built over the Xlib primitives, and a collection of such libraries is called a widget set or toolkit. The toolkit makes the application programmer's life easier by providing simple functions for drawing controls.

Figure 9.3: X toolkit architecture. (X applications call the X toolkit, which is built on Xlib; the client communicates with the X server over the X protocol.)

With multiple clients connecting to the server, there arises a need to manage the different client windows. This is done by a window manager, which is another X client, but a privileged one. The X architecture provides special functions for the window manager to perform actions such as moving, resizing, minimizing, or maximizing a window. For more details, see the official X11 site, http://www.x.org.

9.2.1 Embedded Systems and X

X is highly network oriented and does not apply directly to embedded systems. The reasons X cannot be used as-is on an embedded system are listed below.

- X has mechanisms for exporting a display over a network, which is not required on an embedded system; the client/server model is not aimed at single-user environments such as those on embedded systems.
- X has many dependencies, such as the X font server, the X resource manager, the X window manager, and so on. All of these increase the size of X and its memory requirements.
- X was written to run on resource-rich Pentium-class machines, so running X as-is on power- and cycle-constrained embedded microprocessors is not possible.
The requirements for a graphics framework on an embedded system are:

- Quick, near real-time response
- Low memory usage
- Small toolkit library (footprint) size

Many modifications have been made to X, and micro-versions are available for running on embedded platforms. Tiny-X and Nano-X are popular and successful embedded versions based on the X graphics system. We discuss Nano-X in Section 9.6.

9.3 Introduction to Display Hardware

In this section we discuss various graphics terms as well as generic graphics hardware functions.

9.3.1 Display System

Every graphics display system has a video/display controller, which is the essential graphics hardware. The video controller has an area of memory known as the frame buffer, whose contents are displayed on the screen. Any image on the screen is composed of horizontal scan lines traced by the display hardware. After each horizontal scan line, the trace moves down in the vertical direction and traces the next horizontal scan line. Thus the whole image is composed of horizontal lines scanned from top to bottom, and each scan cycle is called a refresh. The number of screen refreshes per second is expressed as the refresh rate.

The image, before being presented on the screen, is available in the frame buffer memory of the controller. This digital image is divided into discrete memory regions called pixels (short for picture elements). The number of pixels in the horizontal and vertical directions is expressed as the screen resolution. For instance, a screen resolution of 1024 × 768 is a pixel matrix of 1024 columns and 768 rows; these 1024 × 768 pixels are transferred to the screen in a single refresh cycle.

Each pixel is essentially the color information at a particular (row, column) index of the display matrix. The color information present at a pixel is denoted using a color representation standard.
Color is represented either in the RGB domain, using red, green, and blue components, or in the YUV domain, using luminance (Y) and chrominance (U and V) values. RGB is the common representation on most graphics systems. The bit arrangement and the number of bits occupied by each color component give rise to the various formats listed in Table 9.1.

The number or range of color values to be displayed determines the amount of storage occupied by a single pixel, expressed by the term pixel width. For example, consider a mobile display unit with a resolution of 160 × 120 and 16 colors. The pixel width here is 4 bits per pixel (16 unique values are best represented using 4 bits), in other words half a byte per pixel. The frame buffer memory area required is calculated using the formula

    Frame Buffer Memory = Display Width × Display Height × Bytes per Pixel

In this example the required memory area is 160 × 120 × ½ bytes.

Most frame buffer implementations are linear, in the sense that the buffer is one contiguous memory area, similar to an array. The start bytes of successive lines are separated by a constant number of bytes called the line width or stride; in our example the line width is 160 × ½ bytes. Thus the location of any pixel (x, y) is (line_width × y) + x. Figure 9.4 illustrates the location of pixel (40, 4).

Now look at the first three entries in Table 9.1. They differ from the remaining entries in that they are indexed color formats. Indexed formats assign indices that map to particular color shades. For example, in a monochrome display system with just a single bit and two values (0 or 1), one can map 0 to red and 1 to black. Essentially what we then have is a table of color values against their indices. This table is called a Color LookUp Table (CLUT); CLUTs are also called color palettes. Each palette entry maps a pixel value to a user-defined red, green, and blue intensity level.
Now, with the CLUT introduced, note that the frame buffer contents are translated to different color shades based on the values in the CLUT. For instance, the CLUT can be used intelligently on a mobile phone to change color themes, as shown in Figure 9.5.

Table 9.1: RGB Color Formats

    Format           Bits   Red   Green   Blue   Colors
    Monochrome         1     -      -      -     2
    Indexed, 4-bit     4     -      -      -     2^4 = 16
    Indexed, 8-bit     8     -      -      -     2^8 = 256
    RGB444            12     4      4      4     2^12
    RGB565            16     5      6      5     2^16
    RGB888            24     8      8      8     2^24

Figure 9.4: Pixel location in a linear frame buffer. (For the 160 × 120 display at 4 bits per pixel, pixel (40, 4) lies at offset (160 × ½ × 4) + 40 = 360 from the frame buffer base.)

Figure 9.5: CLUT mapping. (Frame buffer memory holds index values; the color lookup table translates each index to a red, green, and blue triple, which becomes the on-screen pixel color.)

9.3.2 Input Interface

An embedded system's input hardware generally consists of buttons, an IR remote, a touchscreen, and so on. Standard interfaces are available in the Linux kernel for common input devices such as keyboards and mice. IR remote units can be interfaced over the serial port; the LIRC project is about interfacing IR receivers and transmitters with Linux applications. The 2.6 kernel has a well-defined input device layer that addresses all classes of input devices. HID (Human Interface Device) is a huge topic, and discussing it is beyond the scope of this chapter.

9.4 Embedded Linux Graphics

The previous section discussed hardware pertaining to embedded systems. The next sections cover the various embedded Linux graphics components in depth. Figure 9.6 provides a quick overview of the layers involved.

9.5 Embedded Linux Graphics Driver

The first frame buffer driver was introduced in kernel version 2.1.
The original frame buffer driver was devised simply to provide a console on systems whose video adapters lack native text modes (such as the m68k). The driver provided the means to emulate a character-mode console on top of ordinary pixel-based display systems. Because of its simple design and easy-to-use interface, the frame buffer driver found its way into graphics applications on all types of video cards. Many toolkits originally written for traditional X window systems were ported to work on the frame buffer interface, and soon new windowing environments were written from scratch targeting this new graphics interface on Linux.

Figure 9.6: Embedded Linux graphics system. (Layer 1: display and input hardware; Layer 2: frame buffer and input drivers in the kernel; in user space, Layer 3 windowing environments such as Qt/E, Nano-X, GDK-fb, and DirectFB, Layer 4 toolkits such as Qt/E, FLNX, and GTK-fb, and Layer 5 graphics applications.)

Today the kernel frame buffer driver is more of a video hardware abstraction layer that provides a generic device interface for graphics applications, and almost all graphical applications on embedded Linux systems make use of it for display. Some of the reasons for the wide use of the frame buffer interface are:

- An easy-to-use, simple interface that depends on the most basic principle of graphics hardware: a linear frame buffer
- User-space applications can access video memory directly, giving immense programming freedom
- No dependency on a legacy display architecture: no network, no client/server model, just simple single-user, direct-display applications
- Graphics on Linux without hogging memory and system resources

9.5.1 Linux Frame Buffer Interface

The frame buffer on Linux is implemented as a character device. This means applications call standard system calls such as open(), read(), write(), and ioctl() on a specific device name; the frame buffer device in user space is available as /dev/fb[0-31].
Table 9.2 lists the interfaces and their operations.

Table 9.2: Frame Buffer Interface

    Interface    Operation
    Normal I/O   open, read, and write over /dev/fb
    ioctl        Commands for setting the video mode, querying chipset information, etc.
    mmap         Map the video buffer area into program memory

The first two operations are common to any device; the third, mmap, is what makes the frame buffer interface unique. We go slightly off track now to discuss the features of the mmap() system call.

The Power of mmap

Drivers are part of the kernel and hence run in kernel memory, whereas applications belong to user land and run in user memory. The only interface available for communication between drivers and applications is the file operations (the fops) such as open, read, write, and ioctl. Consider a simple write operation. The write call happens from the user process, with the data placed in a user buffer (allocated from user-space memory), and is passed over to the driver. The driver allocates a buffer in kernel space, copies the user buffer into it using the copy_from_user kernel function, and performs the necessary action on the buffer. In the case of frame buffer drivers, the data must then be copied or DMAed to the actual frame buffer memory for output. If the application has to write at a specified offset, it has to call seek() followed by write(). Figure 9.7 shows in detail the steps involved in the write operation.

Now consider a graphics application. It has to write data all over the screen area: it might have to update one particular rectangle, sometimes the whole screen, sometimes just a blinking cursor. Performing a seek() followed by a write() each time is costly and time consuming. The fops interface provides the mmap() API for use in such applications.
If a driver implements mmap() in its fops structure, the user application can directly obtain a user-space, memory-mapped equivalent of the frame buffer hardware address. An mmap() implementation is a must for the frame buffer class of drivers. Figure 9.8 shows the steps involved when mmap is used.

Figure 9.7: Repeated seek/write. (For every update the application calls seek() and write(); the kernel driver seeks to the buffer offset and writes or DMAs the data to the frame buffer.)

Figure 9.8: mmap-ed write. (The application calls open() and then mmap(), obtaining a user-space buffer mapped onto the frame buffer's video memory; all subsequent drawing goes directly through the mapping.)

[...]

- PicoGUI (www.picogui.org, LGPL): a new graphical user interface architecture designed with embedded systems in mind; includes low-level graphics and input, widgets, themes, layout, font rendering, and network transparency.
- Qt/Embedded (www.trolltech.com/products/embedded/index.html, QPL/GPL): a C++-based windowing system for embedded devices; provides most of the Qt API.
- GTK+/FB (www.gtk.org, LGPL): frame buffer port [...]

[...] provides a simple but powerful embedded graphics programming interface. The prime features of Nano-X that make it suitable as an embedded windowing environment are listed below.

- Developed from scratch targeting embedded devices, accounting for constraints such as memory and footprint. The entire library is less than 100 KB and uses only 50 to 250 KB of runtime memory.

The Nano-X architecture is modular and has primarily three layers (Figure 9.10). These are:

- The device driver layer
- The device-independent graphics engine
- The API layer (Nano-X and Microwindows) [...]

Figure 9.10: Nano-X windowing system architecture. (Nano-X client applications use the Nano-X API and Microwindows applications use the Win32-like API; both sit on the API layer, which drives the device-independent graphics engine over a device driver layer for the screen display, mouse and touchpad, and keyboard and buttons.)
[...] on Nano-X without many code changes.

9.7 Conclusion

The chapter addressed various questions about graphics systems, their architecture, and the options available on embedded Linux systems. One question remains unanswered: is there a generic solution that can address the entire range of embedded devices requiring graphics, that is, mobile phones to DVD players? The Linux frame buffer provides a solution for [...]

[...] existence on desktop platforms for many years. These libraries abstract the driver interface behind simpler APIs that make sense to a graphics application programmer, and such libraries are essential in all windowing environments. A generic windowing environment consists of:

- An interface layer for low-level drivers, such as the screen and input drivers
- A graphics engine for drawing objects on the screen
- A font engine that is capable [...]

Listing 9.2: Frame Buffer Driver Hardware-Specific Definitions (fragment)

    /* sfb.h */
    #define [...]

[...] the window is displayed on the screen using GrMapWindow(). Mapping the window to [...]

Listing 9.4: Sample Nano-X Application (fragment)

    /* nano_simple.c */
    #define MWINCLUDECOLORS
    #include <stdio.h>
    #include "nano-X.h"

    int main(int ac, char **av)
    {
        GR_WINDOW_ID w;
        GR_EVENT event;

        if (GrOpen() < 0) {
            printf("Can't open graphics\n");
            exit(1);
        }
        /*
         * GrNewWindow(GR_ROOT_WINDOW_ID, X, Y, Width, Height, [...]

Listing 9.2 (continued, fragment): setting the pixel width on the sfb hardware

            *((unsigned int *)(SFB_MODE_REG)) = SFB_8BPP;
            break;
        case 16:
            *((unsigned int *)(SFB_MODE_REG)) = SFB_16BPP;
            break;
        case 32:
            *((unsigned int *)(SFB_MODE_REG)) = SFB_32BPP;
            break;
        }
[...] or unalterable properties of a graphics card when set to work in a particular resolution/mode:

    struct fb_fix_screeninfo {
        char id[16];              /* identification string, e.g. "ATI Radeon 360" */
        unsigned long smem_start; /* start of frame buffer memory */
        u32 smem_len;             /* length of frame buffer memory */
        u32 type;                 /* one of the FB_TYPE_XXX values */
        u32 visual;               /* one of the FB_VISUAL_XXX values */
    [...]

[...] and hence make porting possible. Qt is a cross-platform toolkit available on various platforms, such as Qt/Windows (Windows XP, 2000, NT 4, Me/98/95), Qt/X11 (X Window System), Qt/Mac (Mac OS X), and Qt/Embedded (embedded Linux).

- APIs exported by windowing environment libraries perform simple tasks; toolkits implement many GUI components/objects and provide APIs for them. For example, toolkits provide APIs [...]
