Compilers are programs that convert computer code written in high-level languages intelligible to humans into low-level instructions executable by machines. But there's more than one way to implement a given computation, and modern compilers extensively analyze the code they process, trying to deduce the implementations that will maximize the efficiency of the resulting software.

Code explicitly written to take advantage of parallel computing, however, usually loses the benefit of compilers' optimization strategies. That's because managing parallel execution requires a lot of extra code, and existing compilers add it before the optimizations occur. The optimizers aren't sure how to interpret the new code, so they don't try to improve its performance.

At the Association for Computing Machinery's Symposium on Principles and Practice of Parallel Programming next week, researchers from MIT's Computer Science and Artificial Intelligence Laboratory will present a new variation on a popular open-source compiler that optimizes before adding the code necessary for parallel execution.
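The effect described above can be sketched in plain C. The example below is a minimal, hypothetical illustration rather than the MIT compiler's actual behavior or any real runtime's API: once a compiler has outlined a parallel loop's body into a callback handed to a runtime entry point (the made-up `runtime_parallel_for` below), a later optimization pass looking at `lowered_sum` no longer sees the loop body, so it cannot hoist the loop-invariant `scale * scale` the way it easily could in the original `source_sum`.

```c
/*
 * Illustrative sketch only: why lowering parallel constructs to runtime
 * calls before optimization can block straightforward optimizations.
 */
#include <stdio.h>

#define N 1000

/* What the programmer writes (a parallel loop, shown here as a plain loop).
 * `scale * scale` is the same on every iteration, so the optimizer can
 * hoist it out of the loop. */
double source_sum(const double *a, double scale) {
    double total = 0.0;
    for (int i = 0; i < N; i++) {
        total += a[i] * (scale * scale);
    }
    return total;
}

/* --- What a compiler that lowers parallelism early might produce --- */

struct loop_ctx { const double *a; double scale; double partial; };

/* Outlined loop body: to the optimizer this is just an opaque callback,
 * and `scale * scale` is recomputed on every call. */
static void body(int i, void *p) {
    struct loop_ctx *ctx = p;
    ctx->partial += ctx->a[i] * (ctx->scale * ctx->scale);
}

/* Stand-in for a parallel runtime's "run this body for i = 0..n-1" entry
 * point; the name and signature are invented for this sketch, and the
 * implementation here is a serial placeholder. */
static void runtime_parallel_for(int n, void (*fn)(int, void *), void *p) {
    for (int i = 0; i < n; i++) fn(i, p);
}

double lowered_sum(const double *a, double scale) {
    struct loop_ctx ctx = { a, scale, 0.0 };
    runtime_parallel_for(N, body, &ctx);
    return ctx.partial;
}

int main(void) {
    double a[N];
    for (int i = 0; i < N; i++) a[i] = 1.0;
    printf("%f %f\n", source_sum(a, 2.0), lowered_sum(a, 2.0));
    return 0;
}
```

The compiler the researchers will present reverses this ordering: it keeps parallel constructs in a form the optimizer can reason about, runs the optimizations, and only then generates the code needed to manage parallel execution.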