ffmpeg vp8 decoder
Mans Rullgard has improved the original NEON VP8 patches and pushed them into the main tree.
omap drm/kms display driver
A while back I started experimenting with the DRM display driver framework, and now have a basic driver which implements the KMS part of DRM. It uses a plugin API for the SGX/PVR driver to register and handle its own set of ioctls related to 2d/3d acceleration. Still TBD are overlay support and a cleaner way to handle buffer allocation (GEM?).
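A very rough sketch of what such a plugin hook could look like is below; the struct layout and function names are hypothetical, for illustration only, not the driver's actual API:

```c
/*
 * Hypothetical sketch of a plugin interface that would let the SGX/PVR
 * kernel module hang its own ioctls off of the omap DRM device.
 * Names and layout are illustrative only.
 */
#include <linux/list.h>

struct drm_device;
struct drm_file;
struct drm_ioctl_desc;

struct omap_drm_plugin {
	const char *name;

	/* ioctls the plugin handles, appended after the core KMS
	 * ioctls in the device's ioctl table:
	 */
	struct drm_ioctl_desc *ioctls;
	int num_ioctls;

	/* hooks called by the core driver: */
	int (*load)(struct drm_device *dev);
	int (*open)(struct drm_device *dev, struct drm_file *file);
	void (*release)(struct drm_device *dev, struct drm_file *file);

	struct list_head list;		/* managed by the core driver */
};

/* called by the SGX/PVR module at init/exit time: */
int omap_drm_register_plugin(struct omap_drm_plugin *plugin);
int omap_drm_unregister_plugin(struct omap_drm_plugin *plugin);
```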
Now, with the userspace PVR xorg driver, basic XRandR is working (change resolution, set up a multi-monitor virtual display, etc). Being able to change resolution without cryptic sysfs commands is nice for a change.
universal buffer allocation/management BoF
There was a BoF at ELC last week on the topic of common buffer allocation/management APIs to support zero-copy buffer passing between various IP blocks (display, GPU, codecs, ISP, etc). Currently each SoC vendor has some custom API (CMEM, PMEM, NVMEM, TILER, etc). Google is introducing ION. Most of the rest of the linux world (i.e. desktop) uses GEM and/or TTM, which admittedly are somewhat GPU-centric.
In the desktop world, 3d/codec accelerators and display are all on the graphics card. But in the embedded/SoC world, you might have several vendors who use a common 3d block (for example), but each with its own unique display controller, different video encode/decode accelerators, different ISPs, and so on.
For me, right now GEM is interesting as a way to expose allocation of TILER buffers on OMAP4 for video encode/decode and display. DRI already provides a path in userspace to pass GEM buffers around and use DRM to handle the authentication duties (GEM/DRM are perhaps not strictly required, but they are something that exists in the upstream kernel tree today). But short of mapping buffers into a userspace process, there is currently no good way to pass these buffers to a v4l2 camera or the IVAHD video encoder/decoder. Possibly the interface to the video encoder/decoder IP can be through the DRM display driver (another plugin, perhaps), although that still leaves the camera.
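For illustration, here is roughly what allocating and sharing a GEM buffer from userspace looks like today, using the generic dumb-buffer and flink ioctls; whether TILER-backed buffers on OMAP4 would be exposed exactly this way is still an open question:

```c
/* Minimal sketch: allocate a GEM buffer via the generic "dumb buffer"
 * ioctl, mmap it, and flink it to a global name that another DRM client
 * could open (e.g. passed around via DRI2).  Build with:
 *   gcc sketch.c $(pkg-config --cflags --libs libdrm)
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <xf86drm.h>	/* drmIoctl(), DRM_IOCTL_*, ioctl structs */

int main(void)
{
	int fd = open("/dev/dri/card0", O_RDWR);
	if (fd < 0)
		return 1;

	/* allocate a 1280x720 32bpp buffer */
	struct drm_mode_create_dumb create = {
		.width = 1280, .height = 720, .bpp = 32,
	};
	if (drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create))
		return 1;

	/* get an offset usable with mmap() on the drm fd */
	struct drm_mode_map_dumb map = { .handle = create.handle };
	if (drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &map))
		return 1;

	void *ptr = mmap(NULL, create.size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, map.offset);
	if (ptr == MAP_FAILED)
		return 1;
	memset(ptr, 0, create.size);	/* CPU access to the buffer */

	/* flink: get a global name another DRM client can open */
	struct drm_gem_flink flink = { .handle = create.handle };
	if (drmIoctl(fd, DRM_IOCTL_GEM_FLINK, &flink))
		return 1;
	printf("buffer handle=%u name=%u size=%llu\n",
	       create.handle, flink.name,
	       (unsigned long long)create.size);

	return 0;
}
```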
And there was also a bit of discussion on the related topic of how to expose display to userspace: fbdev is ancient legacy, the v4l2 Media Controller Framework (MCF) is the new kid on the block, but DRM/KMS is what is used in the desktop world. It seems like MCF should be more flexible for building different sorts of graphs, and for handling oddball features like the writeback pipe on OMAP4. On the other hand, DRM/KMS already handles hotplug and EDID parsing, and provides sufficient flexibility for building display graphs (fb -> crtc -> encoder -> connector). At this point I prefer sticking with DRM/KMS for mode setting, so that normal use cases can be exposed to userspace in normal ways.
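For reference, a minimal sketch of how that graph is driven from userspace with libdrm's KMS API; it assumes a buffer handle/pitch already allocated (e.g. as in the earlier sketch), and that the connected connector already has an encoder and crtc bound (error handling mostly elided):

```c
/* Walk the KMS object graph: framebuffer -> crtc -> encoder -> connector,
 * and set a mode on the first connected connector.
 */
#include <stdint.h>
#include <stdio.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int modeset_first_connected(int fd, uint32_t handle, uint32_t pitch)
{
	drmModeRes *res = drmModeGetResources(fd);
	if (!res)
		return -1;

	for (int i = 0; i < res->count_connectors; i++) {
		drmModeConnector *conn =
			drmModeGetConnector(fd, res->connectors[i]);
		if (!conn)
			continue;

		if (conn->connection == DRM_MODE_CONNECTED &&
		    conn->count_modes > 0) {
			/* connector -> encoder -> crtc (assumes one is
			 * already bound; a real driver would pick from
			 * possible_crtcs)
			 */
			drmModeEncoder *enc =
				drmModeGetEncoder(fd, conn->encoder_id);
			drmModeModeInfo mode = conn->modes[0];
			uint32_t fb_id;

			if (!enc) {
				drmModeFreeConnector(conn);
				continue;
			}

			/* wrap the buffer in a drm framebuffer object
			 * (assumes the buffer is at least mode-sized)
			 */
			drmModeAddFB(fd, mode.hdisplay, mode.vdisplay,
				     24, 32, pitch, handle, &fb_id);

			/* point the crtc at the fb, light up the connector */
			drmModeSetCrtc(fd, enc->crtc_id, fb_id, 0, 0,
				       &conn->connector_id, 1, &mode);

			printf("set %dx%d on connector %u\n",
			       mode.hdisplay, mode.vdisplay,
			       conn->connector_id);

			drmModeFreeEncoder(enc);
			drmModeFreeConnector(conn);
			drmModeFreeResources(res);
			return 0;
		}
		drmModeFreeConnector(conn);
	}
	drmModeFreeResources(res);
	return -1;
}
```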
At this point, it isn't clear what the conclusion will be. A more modularized DRM with buffer management more easily split out (or at least shared with other devices)? ION or GEM or some merger of the two? The BoF was just a short one-hour session to better define the problem. The next step will be follow-up sessions during the Linaro Developer Summit in Budapest.