How Display Processing Happens
Linux Splash Screen

Create the new image as a .png using GIMP. Download a couple of tools to help convert that to a .ppm file, then save it in the kernel tree under the drivers/video/logo directory. I haven't figured out how to tell from the source which of the .ppm files there the kernel is actually using, but you can narrow it down by opening them in GIMP. Rename your new one to match the one in use and rebuild the kernel.

Intro to UI

The X Window System (X11, X, and sometimes informally X-Windows) is a windowing system for bitmap displays, common on UNIX-like operating systems. X provides the basic framework for a GUI environment: drawing and moving windows on the display device and interacting with a mouse and keyboard. X does not mandate the user interface; that is handled by individual programs. Linux has no "native window system" since X is not part of the OS, but X (X11, Xlib) is the nearest thing and is often referred to as such. Your other option as a native system is the frame buffer, either direct (FB) or DirectFB (DFB); DFB is an alternative to X11 that sits on top of FB rather than replacing it.

Any OpenGL application needs a rendering context, whether you render directly to the display frame buffer or through a GLUT, X Windows, or EGL application interface. GLUT is a standardized interface, usually a single instance, that can drive OpenGL directly or route OpenGL through an X Windows layer. OpenGL ES is the embedded version, a subset: it does most of what OpenGL does and allows new feature additions and new hardware via extensions. OpenGL SC is the safety-critical version, which dominates the aviation space.

GAL = graphics abstraction layer and HAL = hardware abstraction layer. In the i.MX6 instance the low-level device driver is /dev/galcore, which communicates with the 3D engine. To create a 3D OpenGL ES 1.x app you link libEGL-fb.so and libGLESv1_CM.so.

The platform-specific native types that EGL builds on come from eglplatform.h, roughly like this:

#if defined(_WIN32) || defined(__VC32__) && !defined(__CYGWIN__) && !defined(__SCITECH_SNAP__)
/* Win32 and Windows CE platforms. */
#include <windows.h>
typedef HDC     EGLNativeDisplayType;
typedef HWND    EGLNativeWindowType;
typedef HBITMAP EGLNativePixmapType;

#elif defined(__linux__) && defined(EGL_API_DFB) && !defined(__APPLE__)
/* DirectFB platform. */
#include <directfb.h>
typedef struct _DFBDisplay * EGLNativeDisplayType;
typedef IDirectFBWindow *    EGLNativeWindowType;
typedef struct _DFBPixmap *  EGLNativePixmapType;

#elif defined(__ANDROID__) || defined(ANDROID)
/* Android platform. */
#if ANDROID_SDK_VERSION >= 9
#include <android/native_window.h>
typedef struct ANativeWindow *       EGLNativeWindowType;
typedef struct egl_native_pixmap_t * EGLNativePixmapType;
typedef void *                       EGLNativeDisplayType;
#else
struct android_native_window_t;
typedef struct android_native_window_t * EGLNativeWindowType;
typedef struct egl_native_pixmap_t *     EGLNativePixmapType;
typedef void *                           EGLNativeDisplayType;
#endif

#elif defined(__linux__) || defined(__APPLE__)
/* X11 platform. */
#include <X11/Xlib.h>
#include <X11/Xutil.h>
typedef Display * EGLNativeDisplayType;
typedef Window    EGLNativeWindowType;
typedef Pixmap    EGLNativePixmapType;

#else
#error "Platform not recognized"
typedef void * EGLNativeDisplayType;
typedef void * EGLNativeWindowType;
typedef void * EGLNativePixmapType;
#endif

EGL is an interface between Khronos rendering APIs (such as OpenGL, OpenGL ES or OpenVG) and the underlying native platform windowing system. EGL handles graphics context management, surface/buffer binding, rendering synchronization, and enables "high-performance, accelerated, mixed-mode 2D and 3D rendering using other Khronos APIs."[2] EGL is managed by the non-profit technology consortium Khronos Group. EGL is the specification that defines how the frame buffer is provided to the graphics hardware so that it can render 3D content onto it; in other words, EGL dictates how we configure OpenGL. EGL 1.4 is the latest specification as of 2013, and it supports OpenGL, OpenGL ES (both 2.0 and 3.0) and OpenVG. EGL is the bridge between OpenGL ES and the underlying platform window system. It also provides an interface to create graphics contexts and memory buffers, and a mechanism to copy between platform surfaces and user-accessible memory buffers in a synchronized fashion. In short, EGL is a portable graphics layer (middleware) that supplies the context and windowing capability.
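The window setup used in the Screen Colors example further down (F_InitializeWindow) boils down to a handful of EGL calls. Here is a minimal sketch under assumptions of my own: the default native display, a 16-bit RGB config, and an OpenGL ES 1.x context. The function name and the f_dpy/f_surface globals simply mirror that example (which calls it with no arguments); the native window parameter is added here only to keep the sketch self-contained.

#include <EGL/egl.h>

EGLDisplay f_dpy;
EGLSurface f_surface;
EGLContext f_ctx;

/* Minimal EGL bring-up: display -> config -> surface -> context. */
int F_InitializeWindow(EGLNativeWindowType win)
{
    EGLint attribs[] = { EGL_RED_SIZE, 5, EGL_GREEN_SIZE, 6, EGL_BLUE_SIZE, 5,
                         EGL_SURFACE_TYPE, EGL_WINDOW_BIT, EGL_NONE };
    EGLConfig cfg;
    EGLint ncfg;

    f_dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);    /* native display handle */
    if (f_dpy == EGL_NO_DISPLAY || !eglInitialize(f_dpy, NULL, NULL))
        return -1;
    if (!eglChooseConfig(f_dpy, attribs, &cfg, 1, &ncfg) || ncfg == 0)
        return -1;

    f_surface = eglCreateWindowSurface(f_dpy, cfg, win, NULL);
    f_ctx     = eglCreateContext(f_dpy, cfg, EGL_NO_CONTEXT, NULL);  /* ES 1.x by default */
    if (f_surface == EGL_NO_SURFACE || f_ctx == EGL_NO_CONTEXT)
        return -1;

    /* All GL ES calls after this point render into f_surface. */
    return eglMakeCurrent(f_dpy, f_surface, f_ctx, f_ctx) ? 0 : -1;
}

The win argument is whatever EGLNativeWindowType resolves to in the header above: an X11 Window, a DirectFB window, or a framebuffer window handle on an FB-only build.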
GLUT - The OpenGL Utility Toolkit

GLUT (pronounced like the glut in gluttony) is the OpenGL Utility Toolkit, a window-system-independent toolkit for writing OpenGL programs. It implements a simple windowing application programming interface (API) for OpenGL and makes it considerably easier to learn about and explore OpenGL programming. GLUT provides a portable API, so you can write a single OpenGL program that works across all PC and workstation OS platforms. GLUT is designed for constructing small to medium sized OpenGL programs. While GLUT is well suited to learning OpenGL and developing simple OpenGL applications, it is not a full-featured toolkit, so large applications requiring sophisticated user interfaces are better off using native window system toolkits. GLUT is simple, easy, and small. The GLUT library has C, C++ (same as C), FORTRAN, and Ada programming bindings. The GLUT source code distribution is portable to nearly all OpenGL implementations and platforms. The current version is 3.7; additional releases of the library are not anticipated.

OpenVG is an API designed for hardware-accelerated 2D vector graphics. Its primary platforms are mobile phones, gaming and media consoles, and consumer electronic devices. It was designed to help manufacturers create more attractive user interfaces by offloading computationally intensive graphics processing from the CPU onto a GPU to save energy. OpenVG is well suited to accelerating Flash and the mobile profile of SVG. The OpenGL ES library provides similar functionality for 3D graphics.

Khronos provides an egl.h for the EGL types (e.g. EGLContext), enumerations, config attributes, bitmasks, function declarations (e.g. eglCreateWindowSurface), and so on. It references an eglplatform.h, which in turn pulls in the platform-specific header material like that shown above.

GL Defines

GL_QUADS does not exist in OpenGL ES; you need to use one of the triangle modes instead. Since you're only drawing one quad at a time, you can change GL_QUADS to GL_TRIANGLE_FAN and use the same set of vertices. Modern GPUs only want triangles anyway, so there is a general movement away from quads; see the sketch below.
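A rough illustration of that swap, assuming a current OpenGL ES 1.x context (the vertex data and function name here are arbitrary):

#include <GLES/gl.h>   /* on desktop GL this would be <GL/gl.h> */

/* Four corners of a quad, in order v0..v3 around the outline. */
static const GLfloat quad[] = {
    -1.0f, -1.0f,
     1.0f, -1.0f,
     1.0f,  1.0f,
    -1.0f,  1.0f,
};

void draw_quad(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, quad);
    /* Desktop OpenGL could say glDrawArrays(GL_QUADS, 0, 4).  OpenGL ES has no
       GL_QUADS, so draw the same four vertices as a fan, which produces the
       triangles (v0,v1,v2) and (v0,v2,v3) covering the same area. */
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
    glDisableClientState(GL_VERTEX_ARRAY);
}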
Screen Colors

How to set up the screen and put up colors:

int frame = 0;

// This makes the image take up the whole screen.
GLfloat vert[] = {
    -1.0f, -1.0f, -1.0f,
    -1.0f,  1.0f, -1.0f,
     1.0f,  1.0f, -1.0f,
     1.0f, -1.0f, -1.0f
};

// Changing these values changes the screen color.
GLfloat col[] = {
    1.0, 1.0, 1.0, 1.0,
    1.0, 1.0, 1.0, 1.0,
    1.0, 1.0, 1.0, 1.0,
    1.0, 1.0, 1.0, 1.0
};

F_InitializeWindow();                    // separate function that sets up the window using EGL calls

glClearColor(0.0, 0.0, 0.0, 0.0);
glEnableClientState(GL_VERTEX_ARRAY);    // needed so the vertex array is actually sourced when drawing
glEnableClientState(GL_COLOR_ARRAY);     // enable use of the per-vertex color array
glVertexPointer(3, GL_FLOAT, 0, vert);   // the vert values (image size) to use for rendering
glColorPointer(4, GL_FLOAT, 0, col);     // the color values array

while (frame < 10000) {
    glClear(GL_COLOR_BUFFER_BIT);
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
    eglSwapBuffers(f_dpy, f_surface);    // post color buffer to native window
    frame++;
    if (frame % 1000 == 0) {
        printf("Frame: %d\n", frame);
    }
}

glDisableClientState(GL_COLOR_ARRAY);
return 0;
}

The glColorPointer function by itself does not access the array; it merely saves the given parameters in OpenGL's internal state machine. The data in the array are only accessed when you call glDrawArrays or glDrawElements - only then is the color of each drawn vertex read from the location you gave to glColorPointer.

Problem Solving

What's up with the scissor not working on the embedded target? See http://gamedev.stackexchange.com/questions/40704/what-is-the-purpose-of-glscissor - a common cause is that glScissor has no effect unless GL_SCISSOR_TEST is enabled.

How to spit out random junk to test a frame buffer display:

cat /dev/urandom > /dev/fb0

Display Drivers

http://www.embeddedlinux.org.cn/EssentialLinuxDeviceDrivers/final/ch12lev1sec3.html

From the U-Boot notes for the Sabre/Nitrogen ARM boards: U-Boot support for the following displays is configured by default:

HDMI - 1024 x 768 for maximum compatibility
Hannstar-XGA - 1024 x 768 LVDS (Freescale part number MCIMX-LVDS1)
wsvga-lvds - 1024 x 600 LVDS (Boundary p/n Nit6X_1024x600)
wvga-rgb - 800 x 480 RGB (Boundary p/n Nit6X_800x480)

Since the ipuv3_fb display driver currently supports only a single display, this code auto-detects the panel by probing the HDMI PHY for Hot Plug Detect or the I2C touch controller of the LVDS and RGB displays, in the priority listed above. Setting the 'panel' environment variable to one of the names above overrides auto-detection and forces activation of the specified panel.

You must specify the video mode in the kernel bootargs that U-Boot passes in. Example:

console=ttymxc1,115200 consoleblank=0 video=mxcfb0:dev=lcd,512x128M@60,if=RGB666 rootwait root=/dev/mmcblk0p1

If you don't set consoleblank, Linux puts the screen into power-save mode. You can either specify a resolution or a string that names a specific setting struct in the driver. The if parameter is the color encoding: 666 means 6 bits per color, 18 bits total. Instead of RGB888 you would use RGB24. The framebuffer (FB) and graphics driver can use 16 bpp (RGB565) storage for each pixel; if=RGB666 in the bootargs tells the post-FB logic to convert from 16 to 18 bits for the display controller. If you place bpp=32 in your bootargs line, it will convert the 32-bit FB to the 18-bit display.

There is a tool called fbset that lets you change framebuffer driver settings. Not sure where the default comes from on boot, but the kernel driver reads the resolution passed in and may set things up based on that.
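To check what the driver actually configured (essentially what fbset reads back), the framebuffer ioctls are easy to query from C. A minimal sketch, assuming the panel of interest is /dev/fb0:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fb.h>

int main(void)
{
    struct fb_var_screeninfo var;
    int fd = open("/dev/fb0", O_RDONLY);

    if (fd < 0 || ioctl(fd, FBIOGET_VSCREENINFO, &var) < 0) {
        perror("fb0");
        return 1;
    }
    /* Resolution and depth, plus the timing fields discussed below. */
    printf("%ux%u, %u bpp\n", var.xres, var.yres, var.bits_per_pixel);
    printf("pixclock %u ps, margins L%u R%u U%u D%u, sync len H%u V%u\n",
           var.pixclock, var.left_margin, var.right_margin,
           var.upper_margin, var.lower_margin, var.hsync_len, var.vsync_len);
    close(fd);
    return 0;
}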
The Linux framebuffer documentation does a little explaining. The margins are the same as the porches: left margin = horizontal back porch, right margin = horizontal front porch, upper margin = vertical back porch, lower margin = vertical front porch. The "sync length" may also be called the pulse width - the de-asserted portion of the H and V sync. The asserted portion is made up of the porches on either end with the active resolution in the center; a data valid signal is asserted during the resolution width. The pixel length is just the period of the pixel clock, since one pixel value is clocked in per tick. Each horizontal line of pixels is clocked in one at a time to complete the vertical picture, so the vertical sync is also a frame sync. The retrace time is the sync length + back porch + front porch - everything except the display-on time, which corresponds to the resolution value.

In include/linux/mxcfb.h and include/linux/fb.h there are many FB settings related to these timings, which can be assigned to the sync field in the fb_videomode struct, e.g. FB_SYNC_VERT_HIGH_ACT, which had to be set for a COTS display we used. If the assertion of vsync is inverted, you'll get an image that wraps around the display bottom to top.

/* 512x128 @ 55 Hz, pixel clk @ 25 MHz */
{ "CDH-RGB", 55, 512, 128, 40000,   /* name, refresh, xres, yres, pixclock (ps) */
  48, 144, 128, 4,                  /* left, right, upper, lower margins */
  192, 250,                         /* hsync_len, vsync_len */
  FB_SYNC_CLK_LAT_FALL | FB_SYNC_VERT_HIGH_ACT,
  FB_VMODE_NONINTERLACED, 0, },

For ARM you can look at the available displays in arch/arm/mach-mx6/board-<type>.c. You can change the pixel clock period setting to get different screen refresh rates.
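The refresh rate follows directly from those numbers: the pixel clock rate divided by the total (active plus blanking) pixels per frame. A quick check of the CDH-RGB mode above - plain arithmetic, not a driver API:

#include <stdio.h>

int main(void)
{
    /* Values from the fb_videomode entry above. */
    unsigned pixclock_ps = 40000;            /* 40000 ps per pixel -> 25 MHz  */
    unsigned htotal = 512 + 48 + 144 + 192;  /* xres + margins + hsync = 896  */
    unsigned vtotal = 128 + 128 + 4 + 250;   /* yres + margins + vsync = 510  */
    double   pixclk_hz = 1e12 / pixclock_ps;

    /* One frame takes htotal * vtotal pixel clocks. */
    printf("refresh = %.1f Hz\n", pixclk_hz / (htotal * vtotal));  /* ~54.7 Hz */
    return 0;
}

Shortening the pixel clock period (pixclock) raises the refresh rate; lengthening it lowers it.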
GPU Kernel Driver

The i.MX6 uses a GPU core designed by Vivante. Contiguous memory is the shared memory between the CPU and the GPU. Parameters with examples for the Vivante core driver are:

ARM Pad Control

Note that the i.MX6 IPU LCD interface looks better on screen when the control lines and data lines are at different drive strengths, for whatever reason - 40 ohm has worked for control and 120 ohm for data.

Links to Learn

https://wiki.tizen.org/wiki/Porting_Guide/Graphics_and_UI
https://people.freedesktop.org/~marcheu/linuxgraphicsdrivers.pdf