Documentation/Maemo 5 Developer Guide/Using Multimedia Components/Camera API Usage
This section explains how to use the Camera API to access the camera hardware that is present in some models of Nokia Internet Tablets.
Camera Hardware and Linux
The Linux operating system supports live video and audio hardware such as webcams, TV tuners, video capture cards, FM radio tuners, and video output devices. The primary API through which applications access these devices is Video4Linux2.
Video4Linux2 is a kernel API, so a kernel driver must exist for each supported device. At the user level, device access is standardized via device files. For video capture devices such as cameras, which are the focus of this material, the files are /dev/video0, /dev/video1, and so on, one for each connected device.
Data exchanged between the device file and the user-level application has a standardized format for each device class. This allows an application to be instantly compatible with every video capture device that has a Linux driver.
The built-in cameras present in Maemo devices are compatible with the Video4Linux2 API. In principle, any application compatible with this API is easily portable to the Maemo platform.
Since the Maemo platform delegates all multimedia handling to the GStreamer framework, applications that need access to the built-in camera should do so through GStreamer, via the v4l2src element, instead of accessing the Video4Linux device files directly.
Thanks to the flexibility of GStreamer, a developer can fully test an application on a regular desktop PC with a connected webcam, and then perform the final test on the Internet Tablet itself without a single change in the source code, since GStreamer refers to its modules by text names.
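Because elements are requested by name strings, switching between a desktop test run and the device can be reduced to choosing which name is compiled in. The sketch below mirrors the VIDEO_SRC/VIDEO_SINK macros used later in example_camera.c; the MAEMO_DEVICE guard and the exact values here are illustrative assumptions, not copied from the example.

```c
#include <string.h>

/* Hypothetical build-time selection of element names. The example
 * application uses xvimagesink on the device, but ximagesink when
 * testing under Xephyr, which lacks the XVideo extension. */
#ifdef MAEMO_DEVICE
#define VIDEO_SRC  "v4l2src"
#define VIDEO_SINK "xvimagesink"   /* XVideo-accelerated sink on the device */
#else
#define VIDEO_SRC  "v4l2src"
#define VIDEO_SINK "ximagesink"    /* Plain X sink for Xephyr on the PC */
#endif

const char *video_src_name(void)  { return VIDEO_SRC; }
const char *video_sink_name(void) { return VIDEO_SINK; }
```

The rest of the pipeline code never mentions a concrete sink, so the same source builds and runs in both environments.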
One important note about the camera in the Maemo device: only one application can use it at any given time. While an application is using the camera, other tasks that could make use of it (e.g. a video call) are blocked.
To ease the development of applications that record video and capture still images, a high-level GStreamer element called CameraBin has been developed. CameraBin follows the same idea as other high-level elements like playbin2: capture support can be added to an application without dealing with the low-level media-processing graph.
To demonstrate how camera manipulation is performed, a low-level example application is provided and discussed below.
Camera Manipulation in C Language
This C application allows the Maemo device to be used as a "mirror" (i.e. it shows the camera input on the screen), and it can also take pictures and save them as JPEG files, which illustrates manipulation of the video frame buffer.
In this example, the most interesting function is initialize_pipeline(), since it is responsible for creating the GStreamer pipeline, sourcing data from Video4Linux and sinking it to xvimagesink (an optimized X framebuffer sink). The pipeline scheme is as follows (see example_camera.c in maemo-examples):
/* Initialize the GStreamer pipeline. Below is a diagram
 * of the pipeline that will be created:
 *
 *                           |Screen|  |Screen|
 *                         ->|queue |->|sink  |-> Display
 * |Camera|  |CSP   |  |Tee|/
 * |src   |->|Filter|->|   |\
 *                           |Image|  |Image |  |Image|
 *                         ->|queue|->|filter|->|sink |-> JPEG file
 */
Between the source and the sinks there are two ffmpegcolorspace filters: one to configure the camera frame rate and the picture size expected by the JPEG encoder, and a second to satisfy the video sink. Capabilities ("caps") are employed to tell each filter which format the data must have when leaving it.
The second filter is necessary because the video sink may have requirements (bit depth, color space) different from those of the JPEG encoder. These requirements can also vary according to the hardware.
Because there are two sinks, the queues are important: they guarantee that each pipeline segment downstream of a queue operates in its own thread. This ensures that the different sinks can synchronize without waiting for each other.
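The decoupling the queues provide can be illustrated outside GStreamer with a small pthread sketch (illustrative only, not GStreamer code): a "tee" pushes every frame number into two private queues, and each "sink" drains its own queue in its own thread, so neither branch has to wait for the other.

```c
#include <pthread.h>

#define N_FRAMES 100

/* A trivial one-producer/one-consumer queue: the analogue of a
 * GStreamer queue element giving its downstream branch a thread. */
typedef struct {
    int frames[N_FRAMES];
    int head, tail;
    pthread_mutex_t lock;
    pthread_cond_t cond;
} FrameQueue;

static void queue_init(FrameQueue *q)
{
    q->head = q->tail = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->cond, NULL);
}

static void queue_push(FrameQueue *q, int frame)
{
    pthread_mutex_lock(&q->lock);
    q->frames[q->tail++] = frame;
    pthread_cond_signal(&q->cond);
    pthread_mutex_unlock(&q->lock);
}

static int queue_pop(FrameQueue *q)
{
    int frame;
    pthread_mutex_lock(&q->lock);
    while (q->head == q->tail)          /* wait until a frame arrives */
        pthread_cond_wait(&q->cond, &q->lock);
    frame = q->frames[q->head++];
    pthread_mutex_unlock(&q->lock);
    return frame;
}

/* A "sink": drains its queue independently, summing frame numbers */
static void *sink_thread(void *arg)
{
    FrameQueue *q = arg;
    long sum = 0;
    int i;
    for (i = 0; i < N_FRAMES; i++)
        sum += queue_pop(q);
    return (void *)sum;
}

/* The "tee": copy every frame into both queues, then collect what
 * each sink saw. Returns the common sum, or -1 on mismatch. */
long run_tee_demo(void)
{
    FrameQueue screen_q, image_q;
    pthread_t screen_t, image_t;
    void *screen_sum, *image_sum;
    int i;

    queue_init(&screen_q);
    queue_init(&image_q);
    pthread_create(&screen_t, NULL, sink_thread, &screen_q);
    pthread_create(&image_t, NULL, sink_thread, &image_q);

    for (i = 0; i < N_FRAMES; i++) {
        queue_push(&screen_q, i);
        queue_push(&image_q, i);
    }
    pthread_join(screen_t, &screen_sum);
    pthread_join(image_t, &image_sum);

    return (long)screen_sum == (long)image_sum ? (long)screen_sum : -1;
}
```

Both sinks receive the full stream regardless of how their threads are scheduled, which is exactly the property the queue elements give the screen and image branches after the tee.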
This sample application is no different from other GStreamer applications, whether generic Linux or Maemo-specific (see example_camera.c in maemo-examples):
static gboolean initialize_pipeline(AppData *appdata,
        int *argc, char ***argv)
{
    GstElement *pipeline, *camera_src, *screen_sink, *image_sink;
    GstElement *screen_queue, *image_queue;
    GstElement *csp_filter, *image_filter, *tee;
    GstCaps *caps;
    GstBus *bus;

    /* Initialize GStreamer */
    gst_init(argc, argv);

    /* Create pipeline and attach a callback to its
     * message bus */
    pipeline = gst_pipeline_new("test-camera");

    bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
    gst_bus_add_watch(bus, (GstBusFunc)bus_callback, appdata);
    gst_object_unref(GST_OBJECT(bus));

    /* Save pipeline to the AppData structure */
    appdata->pipeline = pipeline;

    /* Create elements */
    /* Camera video stream comes from a Video4Linux driver */
    camera_src = gst_element_factory_make(VIDEO_SRC, "camera_src");
    /* Colorspace filter is needed to make sure that sinks understand
     * the stream coming from the camera */
    csp_filter = gst_element_factory_make("ffmpegcolorspace", "csp_filter");
    /* Tee that copies the stream to multiple outputs */
    tee = gst_element_factory_make("tee", "tee");
    /* Queue creates new thread for the stream */
    screen_queue = gst_element_factory_make("queue", "screen_queue");
    /* Sink that shows the image on screen. Xephyr doesn't support XVideo
     * extension, so it needs to use ximagesink, but the device uses
     * xvimagesink */
    screen_sink = gst_element_factory_make(VIDEO_SINK, "screen_sink");
    /* Creates separate thread for the stream from which the image
     * is captured */
    image_queue = gst_element_factory_make("queue", "image_queue");
    /* Filter to convert stream to use format that the gdkpixbuf library
     * can use */
    image_filter = gst_element_factory_make("ffmpegcolorspace", "image_filter");
    /* A dummy sink for the image stream. Goes to bitheaven */
    image_sink = gst_element_factory_make("fakesink", "image_sink");

    /* Check that elements are correctly initialized */
    if(!(pipeline && camera_src && screen_sink && csp_filter && screen_queue
        && image_queue && image_filter && image_sink))
    {
        g_critical("Couldn't create pipeline elements");
        return FALSE;
    }

    /* Set image sink to emit handoff-signal before throwing away
     * its buffer */
    g_object_set(G_OBJECT(image_sink),
            "signal-handoffs", TRUE, NULL);

    /* Add elements to the pipeline. This has to be done prior to
     * linking them */
    gst_bin_add_many(GST_BIN(pipeline), camera_src, csp_filter,
            tee, screen_queue, screen_sink, image_queue,
            image_filter, image_sink, NULL);

    /* Specify what kind of video is wanted from the camera */
    caps = gst_caps_new_simple("video/x-raw-rgb",
            "width", G_TYPE_INT, 640,
            "height", G_TYPE_INT, 480,
            NULL);

    /* Link the camera source and colorspace filter using capabilities
     * specified */
    if(!gst_element_link_filtered(camera_src, csp_filter, caps))
    {
        return FALSE;
    }
    gst_caps_unref(caps);

    /* Connect Colorspace Filter -> Tee -> Screen Queue -> Screen Sink
     * This finalizes the initialization of the screen-part of the
     * pipeline */
    if(!gst_element_link_many(csp_filter, tee, screen_queue,
                screen_sink, NULL))
    {
        return FALSE;
    }

    /* gdkpixbuf requires 8 bits per sample which is 24 bits per
     * pixel */
    caps = gst_caps_new_simple("video/x-raw-rgb",
            "width", G_TYPE_INT, 640,
            "height", G_TYPE_INT, 480,
            "bpp", G_TYPE_INT, 24,
            "depth", G_TYPE_INT, 24,
            "framerate", GST_TYPE_FRACTION, 15, 1,
            NULL);

    /* Link the image-branch of the pipeline. The pipeline is
     * ready after this */
    if(!gst_element_link_many(tee, image_queue, image_filter, NULL))
        return FALSE;
    if(!gst_element_link_filtered(image_filter, image_sink, caps))
        return FALSE;

    gst_caps_unref(caps);

    /* As soon as screen is exposed, window ID will be advised to
     * the sink */
    g_signal_connect(appdata->screen, "expose-event", G_CALLBACK(expose_cb),
            screen_sink);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    return TRUE;
}
The following function is called when the user has pressed the "Take photo" button and the image sink has data. It forwards the image buffer to create_jpeg() (see example_camera.c in maemo-examples):
/* This callback will be registered to the image sink
 * after user requests a photo */
static gboolean buffer_probe_callback(
        GstElement *image_sink,
        GstBuffer *buffer, GstPad *pad, AppData *appdata)
{
    GstMessage *message;
    gchar *message_name;
    /* This is the raw RGB-data that image sink is about
     * to discard */
    unsigned char *data_photo =
        (unsigned char *) GST_BUFFER_DATA(buffer);

    /* Create a JPEG of the data and check the status */
    if(!create_jpeg(data_photo))
        message_name = "photo-failed";
    else
        message_name = "photo-taken";

    /* Disconnect the handler so no more photos
     * are taken */
    g_signal_handler_disconnect(G_OBJECT(image_sink),
            appdata->buffer_cb_id);

    /* Create and send an application message which will be
     * caught in the bus watcher function. This has to be
     * sent as a message because this callback is called in
     * a GStreamer thread and calling GUI-functions here would
     * lead to X-server synchronization problems */
    message = gst_message_new_application(GST_OBJECT(appdata->pipeline),
            gst_structure_new(message_name, NULL));
    gst_element_post_message(appdata->pipeline, message);

    /* Returning TRUE means that the buffer is OK to be
     * sent forward. When using fakesink this doesn't really
     * matter because the data is discarded anyway */
    return TRUE;
}
The xvimagesink GStreamer element normally creates a new window just for itself. Since the video is supposed to be shown inside the main application window, the X window ID needs to be passed to the element as soon as the ID exists (see example_camera.c in maemo-examples):
/* Callback to be called when the screen-widget is exposed */
static gboolean expose_cb(GtkWidget *widget, GdkEventExpose *event,
        gpointer data)
{
    /* Tell the xvimagesink/ximagesink the X window ID of the screen
     * widget in which the video is shown. After this the video
     * is shown in the correct widget */
    gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(data),
            GDK_WINDOW_XWINDOW(widget->window));
    return FALSE;
}
For the sake of completeness, the JPEG encoding function follows. It is worth mentioning that the buffer coming from GStreamer is a simple linear framebuffer (see example_camera.c in maemo-examples):
/* Creates a jpeg file from the buffer's raw image data */
static gboolean create_jpeg(unsigned char *data)
{
    GdkPixbuf *pixbuf = NULL;
    GError *error = NULL;
    guint height, width, bpp;
    const gchar *directory;
    GString *filename;
    guint base_len, i;
    struct stat statbuf;

    width = 640;
    height = 480;
    bpp = 24;

    /* Define the save folder */
    directory = SAVE_FOLDER_DEFAULT;
    if(directory == NULL)
    {
        directory = g_get_tmp_dir();
    }

    /* Create an unique file name */
    filename = g_string_new(g_build_filename(directory,
            PHOTO_NAME_DEFAULT, NULL));
    base_len = filename->len;
    g_string_append(filename, PHOTO_NAME_SUFFIX_DEFAULT);
    for(i = 1; !stat(filename->str, &statbuf); ++i)
    {
        g_string_truncate(filename, base_len);
        g_string_append_printf(filename, "%d%s", i,
                PHOTO_NAME_SUFFIX_DEFAULT);
    }

    /* Create a pixbuf object from the data */
    pixbuf = gdk_pixbuf_new_from_data(data,
            GDK_COLORSPACE_RGB,  /* RGB-colorspace */
            FALSE,               /* No alpha-channel */
            bpp/3,               /* Bits per RGB-component */
            width, height,       /* Dimensions */
            3*width,             /* Number of bytes between lines (ie stride) */
            NULL, NULL);         /* Callbacks */

    /* Save the pixbuf contents to a jpeg file and check for
     * errors */
    if(!gdk_pixbuf_save(pixbuf, filename->str, "jpeg", &error, NULL))
    {
        g_warning("%s\n", error->message);
        g_error_free(error);
        gdk_pixbuf_unref(pixbuf);
        g_string_free(filename, TRUE);
        return FALSE;
    }

    /* Free allocated resources and return TRUE which means
     * that the operation was successful */
    g_string_free(filename, TRUE);
    gdk_pixbuf_unref(pixbuf);

    return TRUE;
}
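Because the buffer handed to create_jpeg() is a plain packed 24-bit RGB framebuffer, any pixel can be located with simple stride arithmetic. The tiny helper below (illustrative, not part of the example) encodes the layout that gdk_pixbuf_new_from_data() is told about above: 3 bytes per pixel and a rowstride of 3 * width bytes.

```c
/* Byte offset of pixel (x, y) in a packed 24-bit RGB buffer whose
 * rowstride is 3 * width, matching the arguments passed to
 * gdk_pixbuf_new_from_data() in create_jpeg(). The red, green and
 * blue bytes of the pixel sit at offset, offset+1 and offset+2. */
unsigned int rgb_offset(unsigned int x, unsigned int y,
        unsigned int width)
{
    return (y * width + x) * 3;
}
```

For the 640x480 frame used here, the last pixel's bytes therefore end exactly at 640 * 480 * 3, the total size of the buffer.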
- This page was last modified on 17 August 2010, at 09:49.