As I mentioned in my previous post, my task for this week is to study the remoteproc framework. The goal is to understand how it works, because I'll later be implementing something similar. Because of the limited resources and documentation on this specific framework, I had to start my research from the Linux kernel (3.9.6) source code.
I had to go through several C files in order to understand the framework: remoteproc.h, remoteproc_core.c, and remoteproc_elf_loader.c. Most of them contain functions (an API) that let programmers develop their own platform-specific remote processor driver (different processors need to be controlled differently). Several important functions can be found there:
rproc_boot(), which lets users boot up their remote processor
rproc_shutdown(), which turns off the remote processor and deallocates all memory allocated for it.
Because the framework is generalized, these functions normally contain callbacks to platform-specific functions, rproc->ops->functions() (which are defined in the platform-specific driver), to ensure a working remote processor. What we can conclude is that the framework's purpose is to allow the kernel to control a remote processor. Based on the example of a remoteproc project by Texas Instruments, only one platform-specific driver is needed to control a remote processor. That said, it is safe to say that we can use whatever means we want to control the processor, because everything depends on our platform-specific driver (if we wanted to use Ethernet, we would just have to send some specific commands to control the processor).
Now we know how to control our processor, but what about the communication between the remote processor and the kernel? After some research, I found RPMsg, developed by TI, which works as a communication bus between the remote processor and the kernel. Rpmsg is a virtio-based communication bus, and from what I understand, it uses shared memory (I saw some memcpy() calls in a firmware loading API and in some other operations too) so that the kernel can communicate with the remote processor. After reading rpmsg.h and virtio_rpmsg_bus.c, I still can't find where the communication interface is defined! All I found in those files are APIs to send a message, to register or unregister an rpmsg driver, and so on. These functions got me nowhere in figuring out the communication interface with the remote processor. For example, the rpmsg_send_offchannel_raw() API takes (through function parameters) the destination address (which I assume lives in the shared memory) where we want to send the message to, among other parameters. By examining the rproc_handle_carveout() function in remoteproc_core.c, we can see that a firmware will normally request a contiguous memory region. But I'm still not sure which ports are mapped to this memory region.
My last bet is the virtio framework, which rpmsg relies on heavily. I think I will need to study virtio in order to fully understand the whole remote processor framework. Although virtio is not compulsory (according to the rpmsg documentation, some remote processors don't need a virtio device), I still think it would give me a clearer picture of how everything fits together.
In the meantime, I have started doing some reading on "Linux device drivers" so that I have a better understanding of how a device works with the kernel. This will also help me with the remote processor driver development, which I will begin later.
To conclude, I still have some analysis to do in order to figure out the possible interface between a remote processor and the Linux kernel. If we assume that Ethernet can be used to interface with a remote processor, a special driver would be needed on both the host (our main system) and the client (the remote processor) to translate all the Ethernet commands into usable instructions for the remote processor. In the end, it would act more or less like the Network File System (NFS), but with a special driver to decode commands.