Sunday, November 27, 2011
just catching up on some news since last posting:
First, the TI/OMAP PPA for Ubuntu 11.10 now contains support for hw video codecs via DCE and gst-ducati (decoders: h264, mpeg4, mpeg2, vc1; encoders: h264, mpeg4). Yah!
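With the PPA installed, the ducati decoder elements (ducatih264dec and friends) should get picked up automatically when autoplugging, assuming they register with a high enough rank. A minimal playback sketch against the GStreamer 0.10 API (the media URI is just a placeholder):

    /* build: gcc -o playvid playvid.c `pkg-config --cflags --libs gstreamer-0.10` */
    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);

        /* playbin2 autoplugs decoders by rank, so the gst-ducati
         * elements get preferred when they are installed */
        GstElement *play = gst_element_factory_make("playbin2", NULL);
        g_object_set(play, "uri", "file:///path/to/some-movie.mp4", NULL);

        gst_element_set_state(play, GST_STATE_PLAYING);

        /* block until error or end-of-stream */
        GstBus *bus = gst_element_get_bus(play);
        GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
        if (msg)
            gst_message_unref(msg);

        gst_element_set_state(play, GST_STATE_NULL);
        gst_object_unref(bus);
        gst_object_unref(play);
        return 0;
    }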
But lately I've been mostly working on omapdrm, a DRM/KMS display driver for OMAP, and the corresponding X11 driver (xf86-video-omap). The kernel driver is now queued up in the staging tree for 3.3. But not forgetting multimedia, I've also been working (as a Linaro assignee) on extending the DRI2 protocol for more efficient video rendering (see linux-video.pdf), and on UMM/dmabuf for sharing buffers between multiple devices (camera+drm, or multiple drm devices for a PRIME-type setup).
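The userspace side of the dmabuf sharing is still being worked out, but the rough shape is: one device exports a buffer as a file descriptor, another device imports it, and both end up pointing at the same memory. A sketch of how that might look for the camera+drm case; treat the PRIME export helper and the V4L2 dmabuf import here as assumptions about where the interfaces are headed, not shipping API:

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <xf86drm.h>            /* libdrm: drmPrimeHandleToFD() */
    #include <linux/videodev2.h>    /* v4l2 dmabuf import */

    /* Share one buffer between a DRM device and a v4l2 camera with zero
     * copies: export the GEM buffer as a dmabuf fd, then queue that fd
     * as a v4l2 capture buffer.  Assumes gem_handle was allocated
     * already (e.g. via a driver-specific GEM_NEW ioctl) and that the
     * v4l2 side did VIDIOC_REQBUFS with memory=V4L2_MEMORY_DMABUF. */
    static int share_buffer(int drm_fd, int v4l2_fd, uint32_t gem_handle)
    {
        int dmabuf_fd;

        /* DRM side: GEM handle -> dmabuf file descriptor */
        if (drmPrimeHandleToFD(drm_fd, gem_handle, DRM_CLOEXEC, &dmabuf_fd))
            return -1;

        /* V4L2 side: hand the same fd to the camera driver */
        struct v4l2_buffer buf;
        memset(&buf, 0, sizeof(buf));
        buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_DMABUF;
        buf.index  = 0;
        buf.m.fd   = dmabuf_fd;

        return ioctl(v4l2_fd, VIDIOC_QBUF, &buf);
    }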
And lastly, I've been doing some hacking trying to get xbmc working nicely with the hw video codecs, for hw accelerated HD playback.. but more on that shortly, when I have something working.
Thursday, June 23, 2011
Building DCE firmware
Now, thanks to the public release of the codec libraries, and after some slacking on my part, all the bits and pieces needed to build your very own ducati (cortex-m3) firmware are available. I've put together a wiki page with instructions on where to find all the pieces, and how to build, for the 2.6.38 kernel:
http://www.omappedia.org/wiki/DistributedCodecEngine
This is using syslink-2.0, tiler-2.0, and GA codecs/FC/etc.. but now at least, if you want to use a kernel with a different version of syslink, you can rebuild the firmware with the corresponding bios-syslink on the coprocessor side.
Sunday, April 17, 2011
better late than never
haven't had time to post for a while, so just getting caught up on a few things (in reverse chronological order)
ffmpeg vp8 decoder
Mans Rullgard has improved the original NEON vp8 patches, and pushed them into the main tree.
omap drm/kms display driver
A while back I started experimenting with the DRM display driver framework, and now have a basic driver which implements the KMS part of DRM. It uses a plugin API for the SGX/PVR driver to register and handle its own set of ioctls related to 2d/3d acceleration. Still TBD are overlay support, and a cleaner way to handle buffer allocation (GEM?).
Now, with the userspace pvr xorg driver, basic XRandR is working (changing resolution, setting up a multi-monitor virtual display, etc). Being able to change resolution without cryptic sysfs cmds is nice for a change.
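For comparison with the xrandr path, here's roughly what a mode-set looks like straight through the KMS userspace API. A bare-bones sketch using libdrm: it assumes fb_id is a framebuffer that was already created (e.g. with drmModeAddFB), grabs the first CRTC rather than doing proper encoder/CRTC matching, and skips most error handling:

    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>   /* libdrm KMS API */

    /* Minimal KMS mode-set: find the first connected connector, take
     * its first (preferred) mode, and point a CRTC at it. */
    static int simple_modeset(int fd, uint32_t fb_id)
    {
        drmModeRes *res = drmModeGetResources(fd);
        if (!res)
            return -1;

        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
            if (conn && conn->connection == DRM_MODE_CONNECTED && conn->count_modes) {
                /* just grab the first CRTC; a real client would match
                 * against the connector's encoder's possible_crtcs */
                int ret = drmModeSetCrtc(fd, res->crtcs[0], fb_id, 0, 0,
                                         &conn->connector_id, 1, &conn->modes[0]);
                drmModeFreeConnector(conn);
                drmModeFreeResources(res);
                return ret;
            }
            if (conn)
                drmModeFreeConnector(conn);
        }
        drmModeFreeResources(res);
        return -1;
    }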
universal buffer allocation/management BoF
There was a BoF at ELC last week on the topic of common buffer allocation/management APIs to support zero-copy buffer passing between various IP blocks (display, GPU, codecs, ISP, etc). Currently each SoC vendor has some custom API (CMEM, PMEM, NVMEM, TILER.. etc). Google is introducing ION. Most of the rest of the linux world (i.e. desktop) uses GEM and/or TTM, which admittedly are somewhat GPU-centric.
In the desktop world, 3d/codec accelerators and display are all on the graphics card. But in the embedded/SoC world, you might have several vendors who use a common 3d block (for example), but each with their own unique display controller. And different video encode/decode accelerators. And different ISPs.. and so on.
For me, right now GEM is interesting as a way to expose allocation of TILER buffers on OMAP4 for video encode/decode and display. DRI already provides a path in userspace to pass GEM buffers, and uses DRM to handle the authentication duties (GEM/DRM are perhaps not strictly required.. but they are something that exists in the upstream kernel tree today). But short of mapping buffers into a userspace process, there is currently no good way to pass these buffers to a v4l2 camera, or the IVAHD video encoder/decoder. Possibly the interface to the video encoder/decoder IP can be thru the DRM display driver (another plugin, perhaps), although that still leaves the camera.
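For reference, the way GEM buffers get passed between processes today is the flink/open name mechanism, which is what DRI2 rides on under the hood: the exporter turns a handle into a global name, passes the name over whatever IPC, and the importer opens it (authentication against the DRM master gating who gets to do this). A quick sketch with the core DRM ioctls:

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <xf86drm.h>   /* pulls in drm.h: struct drm_gem_flink/open */

    /* Exporter: turn a GEM handle into a global name that can be
     * passed to another process. */
    static int gem_export_name(int fd, uint32_t handle, uint32_t *name)
    {
        struct drm_gem_flink flink;
        memset(&flink, 0, sizeof(flink));
        flink.handle = handle;
        if (ioctl(fd, DRM_IOCTL_GEM_FLINK, &flink))
            return -1;
        *name = flink.name;
        return 0;
    }

    /* Importer: open the named buffer, getting a local handle on this
     * process's (authenticated) DRM file descriptor. */
    static int gem_import_name(int fd, uint32_t name, uint32_t *handle)
    {
        struct drm_gem_open op;
        memset(&op, 0, sizeof(op));
        op.name = name;
        if (ioctl(fd, DRM_IOCTL_GEM_OPEN, &op))
            return -1;
        *handle = op.handle;
        return 0;
    }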
And there was also a bit of discussion on the related topic of how to expose the display to userspace.. fbdev is ancient legacy, v4l2 MCF is the new kid on the block, but DRM/KMS is what is used in the desktop world. It seems like MCF could be more flexible for building different sorts of graphs, and for handling oddball features like the writeback pipe on OMAP4. But DRM/KMS is already handling hotplug and EDID parsing, and provides sufficient flexibility for building display graphs (fb -> crtc -> encoder -> connector). At this point I prefer sticking with DRM/KMS for mode setting, so that normal uses can be exposed to userspace in normal ways.
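To make that graph concrete, here's a little libdrm sketch that walks the current wiring and prints which encoder and CRTC each connector is hooked to (no error handling, for brevity):

    #include <stdio.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Print the current display graph: connector <- encoder <- crtc. */
    static void dump_display_graph(int fd)
    {
        drmModeRes *res = drmModeGetResources(fd);

        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
            printf("connector %u (%s)", conn->connector_id,
                   conn->connection == DRM_MODE_CONNECTED ?
                   "connected" : "disconnected");

            if (conn->encoder_id) {
                drmModeEncoder *enc = drmModeGetEncoder(fd, conn->encoder_id);
                printf(" <- encoder %u <- crtc %u", enc->encoder_id, enc->crtc_id);
                drmModeFreeEncoder(enc);
            }
            printf("\n");
            drmModeFreeConnector(conn);
        }
        drmModeFreeResources(res);
    }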
At this point, it isn't clear what the conclusion will be. A more modularized DRM, with buffer management more easily split out (or at least shared with other devices)? ION or GEM, or some merger of the two? The BoF was just a short 1hr session to better define the problem; the next step will be follow-up sessions at the Linaro Developer Summit in Budapest.