As meter clusters present an increasingly wide range of information, display resolutions continue to grow. Such large displays must operate at high speed and process large volumes of data, so a graphics processing unit (GPU) is necessary: a central processing unit (CPU) alone cannot handle the load. Even cars built on the same platform carry various types of displays. The designer must therefore implement two kinds of code, CPU-code and GPU-code, which is time-consuming and increases the risk of human error. In this paper, we propose an image-based design process in which CPU-code is ported to GPU-code. The process does not analyze the code itself; instead, it analyzes the output images of the display system. During CPU-code porting, two kinds of human error impair design efficiency: flaws in the logic design, and mistakes in fixed-point operations. Owing to such errors, information does not appear in the right position on the display. The displayed information spans thousands of categories, which makes manual validation of the porting impractical. The proposed process executes the CPU-code and GPU-code simultaneously and automatically evaluates the two display outputs. The process runs on a PC in the design phase and on the actual CPUs and GPUs in the experiment phase. The logic design is validated in the design phase, while signal timing is validated in the experiment phase. Because the messages drawn on the display vary with display resolution, the outputs differ even when the same input is provided to the CPU-code and the GPU-code. Therefore, instead of comparing the output screen images directly, the process applies corner detection to the screen images produced by the CPU-code and the GPU-code, respectively, and compares the resulting features.
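To illustrate the idea of comparing features rather than raw pixels, the following is a minimal sketch. The function names, the normalization scheme, and the tolerance are illustrative assumptions, not details taken from the paper: corner features from the two renderings are mapped into a resolution-independent space and then paired within a tolerance, so screens of different sizes can still be judged equivalent.

```python
def normalize(points, width, height):
    """Map (x, y) pixel coordinates to resolution-independent [0, 1] space."""
    return [(x / width, y / height) for (x, y) in points]

def compare_features(cpu_pts, gpu_pts, tol=0.01):
    """Greedily pair each normalized CPU feature with the first GPU feature
    within tol; returns (pairs, unmatched_cpu, unmatched_gpu).
    Any unmatched feature flags a likely porting error."""
    remaining = list(gpu_pts)
    pairs, unmatched_cpu = [], []
    for p in cpu_pts:
        hit = next((q for q in remaining
                    if abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol), None)
        if hit is None:
            unmatched_cpu.append(p)
        else:
            pairs.append((p, hit))
            remaining.remove(hit)
    return pairs, unmatched_cpu, remaining

# Hypothetical example: the same gauge rendered at 200x400 (CPU) and
# 400x800 (GPU). The normalized positions coincide, so every feature matches.
cpu = normalize([(50, 100), (120, 30)], 200, 400)
gpu = normalize([(100, 200), (240, 60)], 400, 800)
pairs, miss_cpu, miss_gpu = compare_features(cpu, gpu)
```

A coordinate that fails to pair, in this sketch, would correspond to information drawn in the wrong position on one of the two displays.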
The Features from Accelerated Segment Test (FAST) algorithm is used for corner detection. FAST was designed to detect corners in camera images at high speed; however, because camera images change from moment to moment, its corner detection is not stable there. Since the RGB level of a screen output is fixed (unlike a camera image), the process achieves stable corner detection. In the design phase, the process reads the RGB levels directly from the PC's VRAM. In the experiment phase, an FPGA hardware circuit captures the RGB signals of the actual circuits (CPU-circuit and GPU-circuit) and compares the corner-detection features of each. The process not only supports the porting but also improves overall design efficiency: the same mechanism can inspect the code variants for 30 countries and can manage specification generation. Furthermore, when letters change because of a specification change, the system automatically compares the features before and after the change. The proposed process increases design efficiency by a factor of 3.
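The segment test at the heart of FAST can be sketched as follows. This is a simplified FAST-9 written for clarity, not the paper's implementation; the threshold and the synthetic test image are illustrative assumptions. A pixel is a corner when at least n contiguous pixels on a Bresenham circle of radius 3 are all brighter or all darker than the center by the threshold, which is exactly the condition that stays stable when the RGB levels of a screen output are fixed.

```python
import numpy as np

# Bresenham circle of radius 3: the 16 (row, col) offsets FAST examines.
OFFSETS = [(-3, 0), (-3, 1), (-2, 2), (-1, 3), (0, 3), (1, 3), (2, 2), (3, 1),
           (3, 0), (3, -1), (2, -2), (1, -3), (0, -3), (-1, -3), (-2, -2), (-3, -1)]

def _max_circular_run(flags):
    """Length of the longest circular run of True values in flags."""
    if all(flags):
        return len(flags)
    best = run = 0
    for f in flags + flags:  # doubling handles wrap-around
        run = run + 1 if f else 0
        best = max(best, run)
    return min(best, len(flags))

def fast_corners(img, threshold=50, n=9):
    """Simplified FAST-9: a pixel is a corner if >= n contiguous circle
    pixels are all brighter or all darker than the center by `threshold`."""
    h, w = img.shape
    corners = []
    for r in range(3, h - 3):
        for c in range(3, w - 3):
            center = int(img[r, c])
            ring = [int(img[r + dr, c + dc]) for dr, dc in OFFSETS]
            brighter = [p > center + threshold for p in ring]
            darker = [p < center - threshold for p in ring]
            if _max_circular_run(brighter) >= n or _max_circular_run(darker) >= n:
                corners.append((r, c))
    return corners

# Demo: a bright square on a dark background, standing in for a fixed
# RGB screen output. The square's corners respond; its straight edges do
# not, because an edge yields only ~8 contiguous darker circle pixels.
img = np.zeros((20, 20), dtype=np.uint8)
img[5:15, 5:15] = 255
corners = fast_corners(img)
```

With n = 9, a straight edge (about half the circle differing) is rejected while a right-angle corner (about three quarters of the circle differing) is accepted, which is why FAST-9 is a common choice.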
Prof. Masatoshi Arai, Marelli Corporation / Saitama University, Japan; Prof. Dr. Kazuhito Ito, Saitama University, Japan