LLVM Scheduling

LLVM is a compiler project that started as research at the University of Illinois and has since grown into a massive project involving commercial, open-source, and research communities. LLVM is licensed under the "UIUC" BSD-style license.

Recently, Apple also started using LLVM in its latest release, Xcode 4. LLVM is said to be twice as fast as GCC, easy to extend, easy to optimize, and designed for today's modern chip architectures.

Hence, Aeste chose LLVM for its AEMB, and that is where my main task here at Aeste comes in: I was asked to write a new LLVM compiler backend for AEMB. One of my major concerns is that AEMB runs two threads, which affects instruction scheduling, and that is what I am going to talk about now.

When someone wants to write a new backend for LLVM, most of the work will be done inside the llvm/lib/Target directory.

For scheduling, one may first check the *Schedule.td files in existing backends in order to understand how LLVM does code scheduling. The '.td' files are called TableGen files and are used mostly by the LLVM target-independent code generator. The *Schedule.td files give LLVM an idea of what functional units you have, their latencies, and which instructions occupy which functional units.

It's not perfect, but it gives the scheduler a decent idea of how much slack it needs to put between dependent instructions to hide functional unit latency. *Schedule.td is your input as the backend writer; TableGen produces generated output from those files, which LLVM then uses to actually do the scheduling.
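To make this concrete, here is a sketch of the kind of information a *Schedule.td file carries. All the unit and class names below are hypothetical, for illustration only, and are not taken from the real AEMB backend:

```tablegen
// Hypothetical functional units for an AEMB-like core.
def AEMB_ALU : FuncUnit;
def AEMB_LSU : FuncUnit;

// An itinerary class that load instruction definitions would refer to.
def IIC_Load : InstrItinClass;

// A load occupies the load/store unit for 2 cycles, so the scheduler
// knows to leave that much slack before a dependent instruction.
def LoadItin : InstrItinData<IIC_Load, [InstrStage<2, [AEMB_LSU]>]>;
```

`FuncUnit`, `InstrItinClass`, `InstrItinData`, and `InstrStage` are the TableGen classes LLVM provides for describing itineraries; the latencies here are made up.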

Inside *Schedule.td:

  1. You must define the functional units available across the target's chip sets, in my case, for AEMB. Each functional unit is treated as a resource during scheduling and affects instruction order based on its availability during a given time interval.
  2. If you want to forward instruction results, you may use the pipeline bypass/forwarding class.
  3. The instruction itinerary classes used by your target should be defined using the instruction itinerary class interfaces defined by LLVM. In this class, the number of micro-operations each instruction decodes to can also be defined.
  4. There is an instruction stage class that represents a non-pipelined step in the execution of an instruction, such as how many cycles it takes, the choice of functional units, and the discrete time slots needed.
  5. Instruction itinerary data is basically a runtime map of an instruction. Lastly, processor itineraries represent the set of all itinerary classes for a given target's chip set.
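Putting the five pieces above together, a full itinerary definition might look like the sketch below. All AEMB unit names, class names, and latencies here are illustrative assumptions, not the real AEMB backend:

```tablegen
// 1. Functional units (hypothetical names).
def AEMB_ALU : FuncUnit;
def AEMB_LSU : FuncUnit;

// 2. A bypass for forwarding ALU results to dependent instructions.
def AEMB_ALUBypass : Bypass;

// 3. Itinerary classes that instruction definitions refer to.
def IIC_ALU  : InstrItinClass;
def IIC_Load : InstrItinClass;

// 5. Itinerary data mapping each class to its stages, operand
//    latencies, and bypasses, collected into the processor itinerary.
def AEMBItineraries : ProcessorItineraries<
  [AEMB_ALU, AEMB_LSU],   // all functional units
  [AEMB_ALUBypass],       // all bypasses
  [
    // 4. InstrStage<cycles, units>: an ALU op occupies the ALU for
    //    one cycle; its result (operand 0) is ready after 1 cycle
    //    and is forwarded through the bypass.
    InstrItinData<IIC_ALU,  [InstrStage<1, [AEMB_ALU]>],
                  [1, 1], [AEMB_ALUBypass]>,
    // A load occupies the LSU for 2 cycles; its result is ready
    // after 2 cycles, so dependent uses need that much slack.
    InstrItinData<IIC_Load, [InstrStage<2, [AEMB_LSU]>],
                  [2, 1]>
  ]>;
```

Instruction definitions would then name `IIC_ALU` or `IIC_Load` as their itinerary class, and the target's processor definition would reference `AEMBItineraries`.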
