/****************************************************************************
**
** Copyright (C) 2011 Nokia Corporation and/or its subsidiary(-ies).
** Contact: http://www.qt-project.org/
**
** This file is part of the documentation of the Qt Toolkit.
**
** $QT_BEGIN_LICENSE:FDL$
** GNU Free Documentation License
** Alternatively, this file may be used under the terms of the GNU Free
** Documentation License version 1.3 as published by the Free Software
** Foundation and appearing in the file included in the packaging of
** this file.
**
** Other Usage
** Alternatively, this file may be used in accordance with the terms
** and conditions contained in a signed written agreement between you
** and Nokia.
**
**
**
**
**
** $QT_END_LICENSE$
**
****************************************************************************/

/*!
\page cameraoverview.html
\title Camera Overview
\brief Camera viewfinder, still image capture, and video recording.

The Qt Multimedia API provides a number of camera-related classes, so you can
access images and videos from mobile device cameras or webcams. There are both
C++ and QML APIs for common tasks.

\section1 Camera Features

In order to use the camera classes, a quick overview of the way a camera works
is helpful. If you're already familiar with this, you can skip ahead to
\l {camera-tldr}{Camera implementation details}.

[TBD - this needs a diagram]

Conceptually, the camera pipeline is: lens assembly -> sensor ->
image processing -> capture/recording.

\section2 The lens assembly

At one end of the camera is the lens assembly (one or more lenses, arranged to
focus light onto the sensor). The lenses themselves can sometimes be moved to
adjust things like focus and zoom, or they might be fixed in an arrangement
that gives a good balance between keeping objects in focus and cost.

Some lens assemblies can be adjusted automatically so that objects at
different distances from the camera can be kept in focus. This is usually done
by measuring how sharp a particular area of the frame is, and adjusting the
lens assembly until it is maximally sharp. In some cases the camera will
always use the center of the frame for this. Other cameras also allow the
region to focus on to be specified (for "touch to focus" or "face focus"
features).

\section2 The sensor

Once light arrives at the sensor, it gets converted into digital pixels. This
process depends on a number of factors, but ultimately comes down to two
things: how long the conversion is allowed to take, and how bright the light
is. The longer a conversion can take, the better the quality.

Using a flash lets more light hit the sensor, so pixels can be converted
faster, giving better quality for the same amount of time. Conversely,
allowing a longer conversion time lets you take photos in darker environments,
as long as the camera is steady.

\section2 Image processing

After the image has been captured by the sensor, the camera firmware performs
a number of image processing tasks on it to compensate for sensor
characteristics, current lighting, and desired image properties. Faster sensor
pixel conversion times tend to introduce digital noise, so some amount of
image processing can be done to remove this, based on the camera sensor
settings.

The color of the image can also be adjusted at this stage to compensate for
different light sources: fluorescent lights and sunlight give very different
appearances to the same object because of their different color temperatures,
so the image can be corrected according to the white balance of the picture.

Some forms of "special effects" can also be performed at this stage. Black and
white, sepia, or "negative" style images can be produced.

\section2 Recording for posterity

Finally, once a perfectly focused, exposed and processed image has been
created, it can be put to good use. Camera images can be further processed by
application code (for example, to detect barcodes, or to stitch images
together into a panorama), saved to a common format like JPEG, or used to
create a movie. Many of these tasks have classes to assist them.

\target camera-tldr
\section1 Camera Implementation Details

\section2 Viewfinder

While not strictly necessary, it's often useful to be able to see what the
camera is pointing at. Most digital cameras provide an image feed from the
camera sensor at a lower resolution (usually up to the size of the camera's
display) so you can compose a photo or video, and then switch to a slower but
higher resolution mode for capturing the image.

Depending on whether you're using QML or C++, you can do this in multiple
ways. In QML, you can use the \l Camera and \l VideoOutput elements together
to show a simple viewfinder:

\qml
    Camera {
        id: camera

        // You can adjust various settings in here
    }

    VideoOutput {
        source: camera
    }
\endqml

In C++, your choice depends on whether you are using widgets or QGraphicsView.
The \l QVideoWidget class is used in the widgets case, and
\l QGraphicsVideoItem is useful for QGraphicsView.

\snippet doc/src/snippets/multimedia-snippets/camera.cpp Camera overview viewfinder

For advanced usage (like processing viewfinder frames as they arrive, to
detect objects or patterns), you can also derive from
\l QAbstractVideoSurface and set that as the viewfinder for the QCamera
object. In this case you will need to render the viewfinder image yourself.

\snippet doc/src/snippets/multimedia-snippets/camera.cpp Camera overview surface

\section2 Still Images

After setting up a viewfinder and finding something photogenic, we need a new
\l QCameraImageCapture object to capture an image. All that is then needed is
to start the camera, lock it (so that focus and other settings stay as they
were in the viewfinder while the capture occurs), capture the image, and
finally unlock the camera, ready for the next photo.

\snippet doc/src/snippets/multimedia-snippets/camera.cpp Camera overview capture
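
For illustration, the whole flow might look something like the following
sketch. The function and object names are only examples, and a real
application would typically wait for the \l {QCamera::locked()}{locked()}
signal rather than capturing immediately after locking:

\code
    #include <QCamera>
    #include <QCameraImageCapture>
    #include <QVideoWidget>

    // Illustrative sketch only: set up a camera and take one photo.
    void takeStillImage()
    {
        QCamera *camera = new QCamera;
        QCameraImageCapture *imageCapture = new QCameraImageCapture(camera);

        QVideoWidget *viewfinder = new QVideoWidget;
        camera->setViewfinder(viewfinder);
        viewfinder->show();

        camera->setCaptureMode(QCamera::CaptureStillImage);
        camera->start();            // viewfinder frames start flowing

        // On shutter button press; a real application would typically wait
        // for the QCamera::locked() signal before capturing.
        camera->searchAndLock();    // lock focus, exposure and white balance
        imageCapture->capture();    // capture to the default location
        camera->unlock();           // ready for the next photo
    }
\endcode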
\section2 Movies

Previously we saw code that allowed the capture of a still image. Recording
video requires the use of a \l QMediaRecorder object. To record video we need
to create a camera object as before but, this time, as well as creating a
viewfinder we also initialize a media recorder object.

\snippet doc/src/snippets/multimedia-snippets/camera.cpp Camera overview movie

Signals from the \e mediaRecorder can be connected to slots to react to
changes in the state of the recorder or to error events. Recording itself
starts when the \l {QMediaRecorder::record()}{record()} function of
mediaRecorder is called; this causes the
\l {QMediaRecorder::stateChanged()}{stateChanged()} signal to be emitted. The
recording process can be controlled with the
\l {QMediaRecorder::record()}{record()},
\l {QMediaRecorder::stop()}{stop()} and
\l {QMediaRecorder::setMuted()}{setMuted()} slots in \l QMediaRecorder.
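
For illustration, wiring this up might look something like the following
sketch. The receiver object, its slots, and the output file name are
placeholders for your own code:

\code
    #include <QCamera>
    #include <QMediaRecorder>
    #include <QUrl>

    // Illustrative sketch only: record a video clip from the camera.
    void recordVideo(QObject *receiver)
    {
        QCamera *camera = new QCamera;
        QMediaRecorder *mediaRecorder = new QMediaRecorder(camera);

        camera->setCaptureMode(QCamera::CaptureVideo);
        camera->start();

        // React to state changes and errors from the recorder; the slots on
        // 'receiver' are placeholders for your own handlers.
        QObject::connect(mediaRecorder, SIGNAL(stateChanged(QMediaRecorder::State)),
                         receiver, SLOT(recorderStateChanged(QMediaRecorder::State)));
        QObject::connect(mediaRecorder, SIGNAL(error(QMediaRecorder::Error)),
                         receiver, SLOT(recorderError(QMediaRecorder::Error)));

        mediaRecorder->setOutputLocation(QUrl::fromLocalFile("clip.mkv"));
        mediaRecorder->record();    // stateChanged(RecordingState) is emitted

        // ... some time later, stop (or mute) the recording:
        mediaRecorder->stop();
    }
\endcode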
\section2 Controlling the imaging pipeline

Now that the basics of capturing images and movies are covered, there are a
number of ways to control the imaging pipeline to implement some interesting
techniques. As explained earlier, several physical and electronic elements
combine to determine the final images, and you can control them with different
classes.

\section3 Focus and zoom

Focus and zoom are managed primarily by the \l QCameraFocus class.
QCameraFocus allows the developer to set the general policy by means of the
\l {QCameraFocus::FocusMode}{FocusMode} and
\l {QCameraFocus::FocusPointMode}{FocusPointMode} enums.
\l {QCameraFocus::FocusMode}{FocusMode} deals with settings such as
\l {QCameraFocus::FocusMode}{AutoFocus},
\l {QCameraFocus::FocusMode}{ContinuousFocus} and
\l {QCameraFocus::FocusMode}{InfinityFocus}, whereas
\l {QCameraFocus::FocusPointMode}{FocusPointMode} deals with the various focus
zones within the view that are used for the autofocus modes.
\l {QCameraFocus::FocusPointMode}{FocusPointMode} has support for face
recognition (where the camera supports it), center focus, and a custom focus
mode where the focus point can be specified. For camera hardware that supports
it, \l {QCameraFocus::FocusMode}{Macro focus} allows imaging of things that
are very close to the camera. This is useful in applications like barcode
recognition or business card scanning.

In addition to focus, QCameraFocus allows you to control any available optical
or digital zoom. In general, optical zoom is higher quality, but more
expensive to manufacture, so the available zoom range might be limited (or
fixed at unity).
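
As a purely illustrative sketch, a setup that uses continuous autofocus on a
user-chosen point and zooms in as far as the optical zoom allows might look
like this (real hardware may not support every mode shown here, and the
function name is just an example):

\code
    #include <QCamera>
    #include <QCameraFocus>
    #include <QPointF>

    // Illustrative sketch only: configure focus and zoom for a camera.
    void configureFocusAndZoom(QCamera *camera)
    {
        QCameraFocus *focus = camera->focus();

        // Continuous autofocus, centered on a point chosen by the user
        // (coordinates are relative to the frame, from (0, 0) to (1, 1)).
        focus->setFocusMode(QCameraFocus::ContinuousFocus);
        focus->setFocusPointMode(QCameraFocus::FocusPointCustom);
        focus->setCustomFocusPoint(QPointF(0.25, 0.75));

        // Use as much optical zoom as the hardware offers, and no digital zoom.
        focus->zoomTo(focus->maximumOpticalZoom(), 1.0);
    }
\endcode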
\section3 Exposure, aperture, shutter speed and flash

There are a number of settings that affect the amount of light that hits the
camera sensor, and hence the quality of the resulting image. The
\l QCameraExposure class allows you to adjust these settings. You can use this
class to implement techniques like High Dynamic Range (HDR) photos, by locking
the exposure parameters (with \l {QCamera::searchAndLock()}), or motion blur,
by setting slow shutter speeds with small apertures.

The main settings for automatic image taking are the
\l {QCameraExposure::ExposureMode}{exposure mode} and
\l {QCameraExposure::FlashMode}{flash mode}. Several other settings (aperture,
ISO setting, shutter speed) are usually managed automatically but can also be
overridden if desired.

You can also adjust the \l {QCameraExposure::meteringMode()}{metering mode} to
control which parts of the camera frame are used to measure exposure. Some
camera implementations also allow you to specify a point that should be used
for exposure metering - this is useful if you let the user touch or click on
an interesting part of the viewfinder, and then use that point so that the
image exposure is best there.

Finally, you can control the flash hardware (if present) using this class. In
some cases the hardware may also double as a torch (typically when the flash
is LED based, rather than a xenon or other bulb). See also the \l {Torch} QML
element for an easy-to-use API for torch functionality.

\section3 Image processing

The \l QCameraImageProcessing class lets you adjust the image processing part
of the pipeline. This includes the
\l {QCameraImageProcessing::WhiteBalanceMode}{white balance} (or color
temperature), \l {QCameraImageProcessing::contrast()}{contrast},
\l {QCameraImageProcessing::saturation()}{saturation},
\l {QCameraImageProcessing::setSharpeningLevel()}{sharpening} and
\l {QCameraImageProcessing::setDenoisingLevel()}{denoising}.

Most cameras support automatic settings for all of these, so you shouldn't
need to adjust them unless the user wants a specific setting. If you're taking
a series of images (for example, to stitch them together into a panoramic
image), you should lock the image processing settings with
\e {QCamera::searchAndLock(QCamera::LockWhiteBalance)} so that all the images
taken appear similar.

\section3 Canceling Asynchronous Operations

Various operations, such as image capture and auto focusing, occur
asynchronously. These operations can often be canceled by the start of a new
operation, as long as this is supported by the camera. For image capture, the
operation can be canceled by calling
\l {QCameraImageCapture::cancelCapture()}{cancelCapture()}. For autofocus,
auto exposure or white balance, cancellation can be done by calling
\e {QCamera::unlock()} with the corresponding lock type, for example
\e {QCamera::unlock(QCamera::LockFocus)}.

\section1 Examples

There are both C++ and QML examples available.

C++ Examples:

\list
\o \l Camera
\endlist

QML Examples:

\list
\o \l declarative-camera
\o \l qmlvideofx
\endlist

\section1 Reference Documentation

\section2 Camera Classes

\annotatedlist multimedia_camera

\section2 QML Elements

\list
\o \l Camera
\o \l VideoOutput
\o \l Torch
\endlist

*/