Qt includes a framework for gesture programming that can form gestures from a series of events, independently of the input method used. A gesture could be a particular movement of a mouse, a touch screen action, or a series of events from some other source. The nature of the input, the interpretation of the gesture and the action taken are the choice of the developer.
QGesture is the central class in Qt's gesture framework, providing a container for information about gestures performed by the user. QGesture exposes properties that give general information that is common to all gestures, and these can be extended to provide additional gesture-specific information. Common panning, pinching and swiping gestures are represented by specialized classes: QPanGesture, QPinchGesture and QSwipeGesture.
Developers can also implement new gestures by subclassing and extending the QGestureRecognizer class. Adding support for a new gesture involves implementing code to recognize the gesture from input events. This is described in the Creating Your Own Gesture Recognizer section.
Gestures can be enabled for instances of QWidget and QGraphicsObject subclasses. An object that accepts gesture input is referred to throughout the documentation as a target object.
To enable a gesture for a target object, call its QWidget::grabGesture() or QGraphicsObject::grabGesture() function with an argument describing the required gesture type. The standard types are defined by the Qt::GestureType enum and include many commonly used gestures.
for (Qt::GestureType gesture : gestures)
    grabGesture(gesture);
In the above code, the gestures are set up in the constructor of the target object itself.
When the user performs a gesture, QGestureEvent events will be delivered to the target object, and these can be handled by reimplementing the QWidget::event() handler function for widgets or QGraphicsItem::sceneEvent() for graphics objects.
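For a graphics object, the equivalent entry point is sceneEvent(). The following is a minimal sketch, assuming a hypothetical QGraphicsObject subclass named GestureItem; it simply accepts a pinch gesture and consumes the event:

bool GestureItem::sceneEvent(QEvent *event)
{
    if (event->type() == QEvent::Gesture) {
        // Handle the gestures carried by the event; here we only accept a pinch.
        auto *gestureEvent = static_cast<QGestureEvent *>(event);
        if (QGesture *pinch = gestureEvent->gesture(Qt::PinchGesture))
            gestureEvent->accept(pinch);
        return true;
    }
    return QGraphicsObject::sceneEvent(event);
}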
As one target object can subscribe to more than one gesture type, the QGestureEvent can contain more than one QGesture, indicating several possible gestures are active at the same time. It is then up to the widget to determine how to handle those multiple gestures and choose if some should be canceled in favor of others.
Each QGesture contained within a QGestureEvent object can be accepted or ignored individually, using QGestureEvent::accept() and QGestureEvent::ignore(), or all together. Additionally, you can query the data carried by each individual QGesture object (its state) through several getter functions.
A QGesture is by default accepted when it arrives at your widget. However, it is good practice to always explicitly accept or ignore a gesture. The general rule is that if you accept a gesture, you are using it; if you ignore it, you are not interested in it. Ignoring a gesture may mean it is offered to another target object, or it may be canceled.
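For example, inside a gesture handler you might keep one gesture and ignore another. This is a minimal sketch, assuming event is the QGestureEvent being handled:

if (QGesture *pinch = event->gesture(Qt::PinchGesture))
    event->accept(pinch);   // we handle the pinch ourselves
if (QGesture *swipe = event->gesture(Qt::SwipeGesture))
    event->ignore(swipe);   // let the swipe be offered elsewhere or canceled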
Each QGesture goes through several states, and there is a well-defined way for the state to change. Typically, user input causes state changes (by starting and stopping interaction, for instance), but the widget can also cause them.
The first time a particular QGesture is delivered to a widget or graphics item, it will be in the Qt::GestureStarted state. The way you handle the gesture at this point influences whether you can interact with it later.
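A handler typically branches on the gesture's state. The following is a sketch, assuming event is the QGestureEvent delivered to the target object:

if (QGesture *pan = event->gesture(Qt::PanGesture)) {
    switch (pan->state()) {
    case Qt::GestureStarted:
    case Qt::GestureUpdated:
        // interact with the gesture while it is in progress
        break;
    case Qt::GestureFinished:
    case Qt::GestureCanceled:
        // apply the final result or clean up
        break;
    default:
        break;
    }
}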
Using QGesture::CancelAllInContext to cancel a gesture will cause all gestures, in any state, to be canceled unless they are explicitly accepted. This means that active gestures on children will get canceled. It also means that gestures delivered in the same QGestureEvent will get canceled if the widget ignores them. This can be a useful way to filter out all gestures except the one you are interested in.
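For instance, the cancel policy can be set on a gesture when it starts. This is a sketch, assuming pinch is a QGesture obtained from the event:

if (pinch->state() == Qt::GestureStarted)
    pinch->setGestureCancelPolicy(QGesture::CancelAllInContext);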
For convenience, the Image Gestures Example reimplements the general event() handler function and delegates gesture events to a specialized gestureEvent() function:
bool ImageWidget::event(QEvent *event)
{
    if (event->type() == QEvent::Gesture)
        return gestureEvent(static_cast<QGestureEvent*>(event));
    return QWidget::event(event);
}
The gesture events delivered to the target object can be examined individually and dealt with appropriately:
bool ImageWidget::gestureEvent(QGestureEvent *event)
{
    qCDebug(lcExample) << "gestureEvent():" << event;
    if (QGesture *swipe = event->gesture(Qt::SwipeGesture))
        swipeTriggered(static_cast<QSwipeGesture *>(swipe));
    else if (QGesture *pan = event->gesture(Qt::PanGesture))
        panTriggered(static_cast<QPanGesture *>(pan));
    if (QGesture *pinch = event->gesture(Qt::PinchGesture))
        pinchTriggered(static_cast<QPinchGesture *>(pinch));
    return true;
}
Responding to a gesture is simply a matter of obtaining the QGesture object delivered in the QGestureEvent sent to the target object and examining the information it contains.
void ImageWidget::swipeTriggered(QSwipeGesture *gesture)
{
    if (gesture->state() == Qt::GestureFinished) {
        if (gesture->horizontalDirection() == QSwipeGesture::Left
            || gesture->verticalDirection() == QSwipeGesture::Up) {
            qCDebug(lcExample) << "swipeTriggered(): swipe to previous";
            goPrevImage();
        } else {
            qCDebug(lcExample) << "swipeTriggered(): swipe to next";
            goNextImage();
        }
        update();
    }
}
Here, we examine the direction in which the user swiped the widget and modify its contents accordingly.
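The pinch handler follows the same pattern. The sketch below assumes ImageWidget keeps illustrative rotationAngle and scaleFactor members; it is not a quote of the example's exact implementation:

void ImageWidget::pinchTriggered(QPinchGesture *gesture)
{
    QPinchGesture::ChangeFlags changeFlags = gesture->changeFlags();
    if (changeFlags & QPinchGesture::RotationAngleChanged)
        rotationAngle += gesture->rotationAngle() - gesture->lastRotationAngle();
    if (changeFlags & QPinchGesture::ScaleFactorChanged)
        scaleFactor *= gesture->scaleFactor();
    update();
}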
Adding support for a new gesture involves creating and registering a new gesture recognizer. Depending on the recognition process for the gesture, it may also involve creating a new gesture object.
To create a new recognizer, you need to subclass QGestureRecognizer to create a custom recognizer class. There is one virtual function that you must reimplement and two others that can be reimplemented as required.
The recognize() function must be reimplemented. This function handles and filters the incoming input events for the target objects and determines whether or not they correspond to the gesture the recognizer is looking for.
Although the logic for gesture recognition is implemented in this function, possibly using a state machine based on the Qt::GestureState enums, you can store persistent information about the state of the recognition process in the QGesture object supplied.
Your recognize() function must return a value of QGestureRecognizer::Result that indicates the state of recognition for a given gesture and target object. This determines whether or not a gesture event will be delivered to a target object.
If you choose to represent a gesture by a custom QGesture subclass, you will need to reimplement the create() function to construct instances of your gesture class instead of standard QGesture instances. Alternatively, you may want to use standard QGesture instances, but add additional dynamic properties to them to express specific details of the gesture you want to handle.
If you use custom gesture objects that need to be reset or otherwise specially handled when a gesture is canceled, you need to reimplement the reset() function to perform these special tasks.
Note that QGesture objects are only created once for each combination of target object and gesture type, and they might be reused every time the user attempts to perform the same gesture type on the target object. As a result, it can be useful to reimplement the reset() function to clean up after each previous attempt at recognizing a gesture.
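Putting these pieces together, a recognizer subclass might look like the following sketch. The class name and the simple press/release logic are illustrative assumptions, not part of Qt:

class SimpleTapRecognizer : public QGestureRecognizer
{
public:
    QGesture *create(QObject *target) override
    {
        // A custom QGesture subclass could be constructed here instead.
        return new QGesture;
    }

    Result recognize(QGesture *gesture, QObject *watched, QEvent *event) override
    {
        switch (event->type()) {
        case QEvent::MouseButtonPress:
            return QGestureRecognizer::MayBeGesture;   // might become a gesture
        case QEvent::MouseButtonRelease:
            return QGestureRecognizer::FinishGesture;  // deliver a finished gesture
        default:
            return QGestureRecognizer::Ignore;         // irrelevant to this gesture
        }
    }

    void reset(QGesture *gesture) override
    {
        // Clear any per-attempt data stored on the gesture, then restore defaults.
        QGestureRecognizer::reset(gesture);
    }
};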
To use a gesture recognizer, construct an instance of your QGestureRecognizer subclass, and register it with the application with QGestureRecognizer::registerRecognizer(). A recognizer for a given type of gesture can be removed with QGestureRecognizer::unregisterRecognizer().
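Continuing the sketch above, registration and removal might look like this (SimpleTapRecognizer and imageWidget are illustrative names):

Qt::GestureType tapType = QGestureRecognizer::registerRecognizer(new SimpleTapRecognizer);
imageWidget->grabGesture(tapType);
// ...later, when the gesture type is no longer needed:
QGestureRecognizer::unregisterRecognizer(tapType);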
The Image Gestures Example shows how to enable gestures for a widget in a simple image viewer application.
Qt Quick does not have a generic, global gesture recognizer; rather, individual components respond to touch events in their own ways. For example, PinchArea handles two-finger pinch gestures, Flickable handles flicking content with a single finger, and MultiPointTouchArea can handle an arbitrary number of touch points, allowing the application developer to write custom gesture-recognition code.