| Commit message | Author | Age | Files | Lines |
| |
According to QUIP-18 [1], all test files should be
LicenseRef-Qt-Commercial OR GPL-3.0-only
[1]: https://contribute.qt-project.org/quips/18
Pick-to: 6.7
Task-number: QTBUG-121787
Change-Id: I5e82161c6391caa1d44e8f3baac93a95ab80bfdb
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
Reviewed-by: Kai Köhne <kai.koehne@qt.io>
| |
Calling enqueue() should be equivalent to say() when the engine
is Ready, otherwise it should enqueue the text. The old
implementation only enqueued the text when the engine was
in the Speaking state, overwriting the current utterance when the
engine was already synthesizing or paused.
Adjust test to enqueue the next text chunk as soon as the
engine transitions away from the Ready state.
Pick-to: 6.7 6.6
Fixes: QTBUG-122884
Change-Id: I19518a92d1ae73b01dc3de1d9ae6178f5f55b3ad
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
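The intended semantics can be sketched in plain C++. This is a minimal model, not the real QTextToSpeech implementation; Engine, say(), and finishCurrent() are illustrative names:

```cpp
#include <queue>
#include <string>

// Minimal model of the intended enqueue() semantics: speak immediately
// when the engine is idle (Ready), otherwise append to the queue. The
// old code only queued while already Speaking, so a paused or
// synthesizing engine had its current utterance replaced.
enum class State { Ready, Speaking, Paused, Synthesizing };

struct Engine {
    State state = State::Ready;
    std::string current;            // utterance being processed
    std::queue<std::string> pending; // queued utterances

    void say(const std::string &text) {  // interrupts unconditionally
        current = text;
        state = State::Speaking;
    }

    // Fixed behavior: equivalent to say() only when Ready.
    void enqueue(const std::string &text) {
        if (state == State::Ready)
            say(text);
        else
            pending.push(text);  // never clobber the current utterance
    }

    void finishCurrent() {  // engine finished one utterance
        if (!pending.empty()) {
            current = pending.front();
            pending.pop();
            state = State::Speaking;
        } else {
            current.clear();
            state = State::Ready;
        }
    }
};
```

With this model, enqueueing while Speaking leaves the current utterance untouched, which is exactly what the old implementation got wrong for the Paused and Synthesizing states.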
| |
"en-UK" would be a (non-existent) Ukrainian regional version of English.
After recent CLDR updates, this now fails, as it should, so fix it to
use "en-GB" to compare with Bob's expected "Oxford English" dialect.
Fixes: QTBUG-122950
Pick-to: 6.7 6.6
Change-Id: I5bc87d30b1f5f3f9804206c069be2ad5a7dc5d43
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
| |
This reverts commit 887f04a8ae406799b972f1c73b37b8e687a3e539.
Reason for revert: Root cause fixed with b93bcc2c9c2880461a9aab8384c61f5ddcfa30d6
Change-Id: I86ef1bcba10e1373d21822f8377a248fdd169ca4
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
The COIN VM for current openSUSE targets (Leap 15.5) is provisioned
without a dummy audio device. That makes all tests flaky and
sayMultiple() fail. Temporarily blacklist the test class on openSUSE.
Cherry-pick this down to 6.5, because all versions are affected.
Fixes: QTBUG-120655
Pick-to: 6.7 6.6 6.5
Change-Id: I975890fb454a09eab2039b959748e4ef78908150
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
Amends 428cae98619584a51c01be135fecf6b347c117b3. If we drop the RIFF
header bytes, then we have fewer bytes available as well, so decrease
that value. Otherwise we never fetch more data.
Improve the test, which should have caught this regression, by testing
that synthesize() cycles correctly through the states even if no
default audio output is present. Previously, the test was skipped
entirely in that case and so did not detect the regression.
Fixes: QTBUG-118668
Pick-to: 6.6
Change-Id: I3b4276c13ce2c77ec718d1b7bbca6f3421636890
Reviewed-by: Jarkko Koivikko <jarkko.koivikko@code-q.fi>
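The byte-accounting bug can be modeled in a few lines of plain C++. PcmReader and its members are illustrative, not the engine's real code; 44 bytes is the canonical size of a PCM WAV (RIFF) header:

```cpp
#include <cstddef>

// A reader that strips the RIFF/WAV header from the first chunk must
// also subtract the header size from its "bytes available" bookkeeping.
// Otherwise it believes more data is buffered than there really is and
// never requests the next chunk.
constexpr std::size_t kRiffHeaderSize = 44;

struct PcmReader {
    std::size_t available = 0;  // payload bytes we can still hand out
    bool headerSkipped = false;

    // Called with the size of each incoming chunk from the synthesizer.
    void onChunk(std::size_t chunkBytes) {
        if (!headerSkipped && chunkBytes >= kRiffHeaderSize) {
            chunkBytes -= kRiffHeaderSize;  // the fix: drop the header
            headerSkipped = true;           // bytes from the count, too
        }
        available += chunkBytes;
    }

    bool needsMoreData(std::size_t wanted) const {
        return available < wanted;
    }
};
```

Without the subtraction in onChunk(), `available` would be 44 bytes too high after the first chunk and needsMoreData() could stay false forever.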
| |
Gracefully handle the case where Multimedia is not available.
Also, as we now centrally check for Multimedia availability,
remove the individual checks.
Pick-to: 6.6
Fixes: QTBUG-117824
Change-Id: If40c7f98f1dfa48c91f504dbdf657067044860c3
Reviewed-by: Alexandru Croitor <alexandru.croitor@qt.io>
| |
By adding it to the default build flags via .cmake.conf.
This amends commit c641e462e2bd33972646bd20bc76f8dff4e6d01d.
Task-number: QTBUG-116296
Change-Id: Ie1e5567ef88843d2a85ec6be03cd4d72183ba269
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
They were only needed for QML clients; for C++ clients, the version
taking a callable is more convenient. But QML clients cannot really use
QAudioFormat (there is no QML version of the type), and operating
on QByteArrays is also not something we want QML (or JavaScript) code
to do.
The engines still emit a signal, as that makes it easier to implement
the engine.
Change-Id: Ie24a41195cd5b7e27ec2b1562fb3f8e515c5adc3
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
Reviewed-by: Jeremy Whiting <jpwhiting@kde.org>
| |
Formally, QAudioBuffer is the right type for carrying audio data, but
it's hardly used in Qt Multimedia itself, and not very practical to use
for writing the received PCM data to a file or to stream it out to a
QAudioSink (which operates on a QIODevice, e.g. with a byte array).
Nevertheless, allow a callback to take a QAudioBuffer instead of
QAudioFormat and QByteArray, as the QAudioBuffer facilities might be
useful for some use cases.
Change-Id: I260a4cf6cf91f57356373f4ef9cf248927159b40
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
| |
Android lacks this capability as the native TextToSpeech class has no
API for this. Since we now have the capabilities enum, add PauseResume
and don't list that for Android, so that applications can disable the
respective UI.
Do that for our examples - hide the pause & resume buttons if the
capability flag is not set.
Document the unsupported capabilities for all engines that lack some
features.
Fixes: QTBUG-113805
Change-Id: Ia8139e235f4cd968519423515e31c81285a2d349
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
| |
For UIs that just want to display the currently spoken word, this is
easier to connect to existing APIs. Also add the ID of the utterance,
as returned by enqueue().
The index and length into the overall string stay as the final
parameters.
Change-Id: I70bf35eeadd24540670cad2edd42126331796f4b
Reviewed-by: Fabian Kosmale <fabian.kosmale@qt.io>
| |
Change-Id: Ie4c32de764686bf0cf083ab0ad4aa20da4a53203
Reviewed-by: Fabian Kosmale <fabian.kosmale@qt.io>
| |
The text passed into the function is not said next; it's said after
all currently pending texts.
Change-Id: Ic0ea885b65cdd8d7a055616fc9098d3c8cd3d397
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
| |
Add a map-type property that QML code can initialize to engine-specific
key/value mappings. Changing the property at runtime re-initializes the
engine.
Implement an asynchronous initialization option in the mock engine so
that we can test those code paths better.
Change-Id: I0f2667b9b8e2339fa2e6966a2669f6f54ff2572a
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
Reviewed-by: Jarkko Koivikko <jarkko.koivikko@code-q.fi>
| |
With QML, the engine property will always be set after the QTextToSpeech
object was created with the default engine. That the default constructor
of QTextToSpeech (with the engine left empty) implicitly creates the
default engine is wasteful if that engine later gets replaced.
Worse, the engine was the only place where property values were stored,
and overriding the engine would not maintain those values.
This makes TextToSpeech susceptible to the order in which declared
properties are set, which breaks declarative programming.
Use the special engine name "none" to delay the creation of the engine
in the default constructor, and override the "engine" property to
intercept calls to the getter and setter. Then set the engine in the
override of componentComplete.
Store set values for pitch, rate, and volume in the QTextToSpeech
object directly so that we can initialize the engine with those values,
no matter the order in which properties are set. This also allows us to
maintain the values from the old engine when changing engines.
We cannot do that for locale and voice, as those are engine-dependent.
However, it's not possible to set a voice without getting it from the
engine first anyway, and the voice selector is only executed at the end
of componentComplete.
Add a test to verify that all relevant properties have the right values,
even when setting or changing the engine later.
Change-Id: Ib162f87c1f9ceaad1fe8674c149290ac9141fccc
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
Reviewed-by: Qt CI Bot <qt_ci_bot@qt-project.org>
| |
This makes it possible to select a voice declaratively, by specifying
the selection criteria in a grouped property. If the VoiceSelector is
specified as part of the element declaration, then the voice is selected
only once the component is complete and all selection criteria are
set.
Since this adds a bunch of QML-specific code to the module, move the
existing QML binding code and module build files into a subdirectory.
Move the findVoices invokable from QTextToSpeech to the new QML-specific
subclass (it's not needed as a C++ API and was documented as internal in
QTextToSpeech).
Change-Id: I6f8907f53b513d1108f8446d57bef5975035163b
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
Reviewed-by: Jarkko Koivikko <jarkko.koivikko@code-q.fi>
| |
QVoice is not a type that can be constructed by client code, it can only
be constructed by the engine (as the QVoice instance carries the engine
specific identifier, which an application cannot know).
Applications that want to select a voice that matches a certain (set of)
criteria - for instance "a male English voice" - have to go to some
lengths to first get the list of all voices (availableVoices returns
only the voices for the current locale, so one first has to go through
all locales, set each of them, and combine the lists of voices for each
locale), and then postprocess that list to find a voice that matches.
For C++, a variadic template function returns the list of voices that
matches an arbitrary set of criteria, in an arbitrary order. Each
argument is matched against the corresponding property of a voice based
on its type. It's possible to match voices against only language or
territory as well as a fully defined QLocale object.
For QML, the function takes a map from voice property names to
values. Since QML cannot construct a locale with an "Any" territory
or language, a "language" property is added to the voice type so that
only the language attribute of the provided locale is compared.
Make this testable by providing the mock engine with support for a
parameter "voices". That parameter takes a list of std::tuple, as
otherwise we'd have to create a VoiceData struct type of sorts that the
test can use. Extend the list of built-in voices, and improve QVoice's
debug output by printing the full locale, not just the language.
Change-Id: I5266e65932ea3db52fae92a6f50caa14dbe1f2f6
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
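The type-based matching that the commit describes for the C++ variadic overload can be sketched in plain C++. Voice, Gender, Age, and the field names are simplified stand-ins for QVoice and its properties, not the real API:

```cpp
#include <string>
#include <vector>

// Each criterion argument selects the voice property to compare purely
// by its type, so callers can pass any subset in any order.
enum class Gender { Male, Female };
enum class Age { Child, Adult, Senior };

struct Voice {
    std::string language;
    Gender gender;
    Age age;
};

// One overload per criterion type dispatches to the right property.
inline bool matches(const Voice &v, const std::string &lang) { return v.language == lang; }
inline bool matches(const Voice &v, Gender g) { return v.gender == g; }
inline bool matches(const Voice &v, Age a) { return v.age == a; }

// Variadic conjunction: a voice qualifies if it matches every argument.
template <typename... Criteria>
std::vector<Voice> findVoices(const std::vector<Voice> &all,
                              const Criteria &...criteria)
{
    std::vector<Voice> result;
    for (const Voice &v : all)
        if ((matches(v, criteria) && ...))  // C++17 fold expression
            result.push_back(v);
    return result;
}
```

Calling `findVoices(all, Gender::Male, std::string("en"))` and `findVoices(all, std::string("en"), Gender::Male)` then select the same voices, which is the "arbitrary set of criteria, in an arbitrary order" property.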
| |
To verify that our APIs work from QML; no functional testing is
performed. Only the mock engine is used.
Change-Id: Ia4c28418cc5e72f6c54bcbb06e48d1d0677e73e5
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
| |
Add a sayNext() slot that doesn't stop any ongoing speech, and
instead enqueues the new text. The cross-platform implementation
keeps track of enqueued texts, and processes the next text when
the engine's state changes to ready.
Make this the default behavior for synthesize(). It makes no
sense to interrupt an ongoing process; the application can just
stop an ongoing process and discard the PCM data it doesn't want.
The stateChanged signal does not get emitted when the engine's
state changes to Ready and there are texts in the queue. To
allow applications to keep track of the text that is about to be
spoken, add a new aboutToSynthesize signal that gets emitted each
time text is about to be passed down to the engine. This also
allows applications to make last-minute changes to the voice
attributes.
To accurately keep track of which text within the data structure
of the application is about to be spoken or finished, applications
do need to keep track of the text segments passed to QTextToSpeech
and update their "current" iterator with the aboutToSynthesize
signal.
Task-number: QTBUG-102355
Change-Id: I7b8621e15ee8d520b156e1fd771e120ded731fd8
Reviewed-by: Qt CI Bot <qt_ci_bot@qt-project.org>
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
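The application-side bookkeeping this implies can be modeled in plain C++. SpeechQueue and its members are illustrative, not the real QTextToSpeech signatures; the point is that the queue fires a notification once per segment before handing it to the engine:

```cpp
#include <cstddef>
#include <functional>
#include <queue>
#include <string>

// stateChanged stays silent while the queue drains, so an application
// that wants to know which of its text segments is current listens to
// an aboutToSynthesize-style callback fired once per segment.
struct SpeechQueue {
    std::function<void()> aboutToSynthesize;  // fired before each segment
    std::queue<std::string> pending;

    void enqueue(const std::string &text) { pending.push(text); }

    void drain() {  // the engine processes the whole queue
        while (!pending.empty()) {
            if (aboutToSynthesize)
                aboutToSynthesize();  // last chance to tweak the voice
            pending.pop();            // segment handed to the engine
        }
    }
};
```

An application keeps a "current segment" index and advances it in the callback, exactly as the commit message suggests for the real signal.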
| |
Amends c03afcc297bf250baff8d0693e4db0c8cc77eeed. We already supported
member functions that only take a QAudioFormat, so make this symmetrical
and support it for lambdas and free functions as well.
Change-Id: Ia8955ecd6ccc569aec326e469f0dd68306927218
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
| |
Some engines might support capabilities depending on the runtime
environment, such as the exact operating system version. Allow engines
to override a virtual capabilities() method for that. By default, the
meta-data from the plugin is used.
Remove hard-coded special cases for older Android versions from the test.
Change-Id: I44713b3c5323f6a83713f1e2465920ab21788bab
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
| |
The function starts the synthesis as an asynchronous process, and
emits a signal 'synthesized()' (or calls a functor) with a chunk of
PCM data as a QByteArray, and the QAudioFormat in which the data is
encoded.
This adds a dependency on Qt Multimedia for Qt Speech on all
platforms; so far it was required only by the flite and winrt
backends.
Implemented for all engines, except the speechd and macos engines,
where it's not possible - these engines don't provide access to the data.
The test case verifies that the implementation is asynchronous, and
that it produces a reasonable amount of data. Since this involves
timer-based measurements, values need to be compared with some
appropriate margins.
The QML documentation of this API is omitted on purpose; the
QAudioFormat type is not available in QML, and we don't want to
encourage users to operate on raw bytes from QML anyway.
[ChangeLog][QtTextToSpeech][QTextToSpeech] Added the ability to
produce PCM data as a QByteArray. The QtTextToSpeech module now
depends on QtMultimedia on all platforms.
Fixes: QTBUG-109837
Change-Id: I308a3e18998827089c0f75789b720f1bd36e3c46
Reviewed-by: Qt CI Bot <qt_ci_bot@qt-project.org>
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
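The asynchronous, chunked contract can be modeled in plain C++. Format, Chunk, and the fake word-per-chunk synthesis below are invented for illustration; the real API hands a QAudioFormat and QByteArray to the functor:

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

struct Format { int sampleRate; int channels; };  // stand-in for QAudioFormat
using Chunk = std::vector<std::uint8_t>;          // stand-in for QByteArray

// The caller provides a sink that is invoked repeatedly with PCM data
// and the format it is encoded in, rather than receiving one big
// buffer at the end.
void synthesize(const std::string &text,
                const std::function<void(const Format &, const Chunk &)> &sink)
{
    const Format fmt{22050, 1};
    // Fake "synthesis": emit one fixed-size chunk per word, so callers
    // observe several callbacks per utterance.
    const Chunk chunk(16, 0);
    std::size_t words = 1;
    for (char c : text)
        if (c == ' ')
            ++words;
    for (std::size_t i = 0; i < words; ++i)
        sink(fmt, chunk);
}
```

A real consumer would append each chunk to a file or feed it to an audio sink; the model only shows the callback shape and the multiple-invocations behavior the test case verifies.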
| |
This is useful information for a UI that wants to visualize progress
by highlighting words and sentences as they get read. For this to
work, we ideally emit data through a signal for each word that
allows an application to map the progress information to the text that
was previously passed into QTextToSpeech::say, i.e. the index and
length of the word within that text.
Implement this for all engines where we can, and add a test that
verifies that we get correct information:
On the macos and darwin backends, the delegate gets called for each
word about to be spoken, with index and length of the content relative
to the text. We don't get access to more detailed information, like
the length of the stream in seconds or samples, or the current playback
state.
Android provides an equivalent listener callback that tells us which
slice of the text is about to be spoken.
In the WinRT backend, we can ask the speech synthesizer to generate
track data for the generated audio, which gives us access for each
sentence and word, with the start time for each. Since we play the PCM
data ourselves, we don't get called with progress updates, but we can
use the track information to run a timer that iterates over the
boundaries with each tick. This risks getting out of sync with
the actual playback, but we can try to compensate for that.
We can use a similar strategy on flite, where the symbol tree provides
start times for each token. So we can use a timer, and follow the
progress through the input text for each token.
On speechd we don't have reliable access to anything; it theoretically
supports reporting of embedded <mark> tags when the input is SSML. So
for now, speechd cannot support this functionality.
Add highlighting of the spoken word to the Qt Quick example.
Change-Id: I36ff208b2f0112c9eb261864515ba20c4bf55f25
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
Reviewed-by: Qt CI Bot <qt_ci_bot@qt-project.org>
| |
Plugins can declare in the plugin JSON file which capabilities
the engine supports, which then allows applications to check what
QTextToSpeech APIs they can use.
Change-Id: Id22ac55a3731591ed8bb53e8db76705de10e814f
Reviewed-by: Jeremy Whiting <jpwhiting@kde.org>
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
| |
The macro is declared as variadic, but in practice takes only a single
parameter, the skip reason.
Pick-to: 6.5 6.4
Change-Id: Ica0f9dfcf94e09b0e15745313285c3b5ac89f8e7
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
| |
It's flaky/failing, but cannot be reproduced locally.
Task-number: QTBUG-108205
Pick-to: 6.4
Change-Id: I4b17ca5570fbb4f2c4bf683c02dca4beb6c5108c
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
This is a semantic patch using ClangTidyTransformator as in
qtbase/df9d882d41b741fef7c5beeddb0abe9d904443d8:
auto QtContainerClass = anyOf(
    expr(hasType(cxxRecordDecl(isSameOrDerivedFrom(hasAnyName(<classes>))))).bind(o),
    expr(hasType(namedDecl(hasAnyName(<classes>)))).bind(o));
makeRule(cxxMemberCallExpr(on(QtContainerClass),
                           callee(cxxMethodDecl(hasAnyName({"count", "length"}),
                                                parameterCountIs(0)))),
         changeTo(cat(access(o, cat("size"), "()"))),
         cat("use 'size()' instead of 'count()/length()'"))
a.k.a qt-port-to-std-compatible-api with config Scope: 'Container',
with the extended set of container classes recognized.
Change-Id: Ib7ee4af5785944e388b550285b7d67a585c38468
Reviewed-by: Qt CI Bot <qt_ci_bot@qt-project.org>
Reviewed-by: Jeremy Whiting <jpwhiting@kde.org>
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
init() used QTRY_VERIFY to assert that the speechd engine reported
ready after construction. The following QSKIP statement was not reached
if the QTRY_VERIFY failed.
This patch removes the QTRY_VERIFY, which leads to speechd being
skipped when an error is reported after engine construction.
sayWithVoices created a test text for each available voice to be
spoken. The default speech-dispatcher installation on RHEL 9 provides
158 voices, which leads to the test failing with a timeout.
This patch ends the loop with a qWarning after 10 voices, preventing
the timeout if an engine has too many voices.
Both fixes are combined in one commit, because they would fail CI
if staged separately.
Fixes: QTBUG-106286
Pick-to: 6.4
Change-Id: I8823bbf567c47229966041611f1defb3f75f9fc6
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
Task-number: QTBUG-105718
Change-Id: I8ddfa6b3741acb3953446ef48d80bf5dde9f828d
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
Reviewed-by: Qt CI Bot <qt_ci_bot@qt-project.org>
| |
Change-Id: I203344f2e0c1c0335e8c98fef811a679f68eda1d
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
CMakeLists.txt and .cmake files of significant size
(more than 2 lines according to our check in tst_license.pl)
now have the copyright and license header.
Existing copyright statements remain intact.
Task-number: QTBUG-88621
Change-Id: I947479e0fc6301e1622478a2c5b41a269dc8407e
Reviewed-by: Qt CI Bot <qt_ci_bot@qt-project.org>
Reviewed-by: Jörg Bornemann <joerg.bornemann@qt.io>
| |
We know that certain features and voices are broken with certain engines
due to bugs in the native APIs, and we can't work around those. Try to
avoid dysfunctional voices for speechd, and document bugs in native APIs
in the test code via QEXPECT_FAIL and QSKIP.
Remove BLACKLIST file.
Fixes: QTBUG-55274
Pick-to: 6.4
Change-Id: Ib8f26e60346ac7ca95f60433ea7e95879d0b0422
Reviewed-by: Qt CI Bot <qt_ci_bot@qt-project.org>
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
Reviewed-by: Jeremy Whiting <jpwhiting@kde.org>
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
Only start measuring time when the engine starts speaking, and move the
connection and lambda outside the for-loop so that connections don't
accumulate.
Pick-to: 6.4
Change-Id: I6f0009ea6101a8c57c1d04e1a928d9f73296c5dc
Reviewed-by: Jeremy Whiting <jpwhiting@kde.org>
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
Replace the current license disclaimer in files with
an SPDX-License-Identifier.
License files are organized under LICENSES directory.
Pick-to: 6.4
Task-number: QTBUG-67283
Change-Id: I5a15004abaab3f2d002adf47ae053b95abb41cb8
Reviewed-by: Jörg Bornemann <joerg.bornemann@qt.io>
| |
It's the default engine for both iOS and macOS.
Pick-to: 6.4
Change-Id: I02c9ac77f10ad6faccf32e6ed58399890cfc770c
Reviewed-by: Jarkko Koivikko <jarkko.koivikko@code-q.fi>
| |
Add an enum class ErrorReason for error handling in QTextToSpeech.
Rename BackendError to Error in enum QTextToSpeech::State, as the error
might not be in the synthesizer but in the audio playback, or due to
invalid input.
Update all sources and tests referring to BackendError to use new name.
Update documentation of enum QTextToSpeech::State, add documentation
for new enum QTextToSpeech::ErrorReason.
Add getters, setters, signals, and slots for error handling.
Implement error handling for flite plugin, covering audio backend,
initialization and synthesizing errors. For the other plugins, handle
basic initialization errors.
Change-Id: I233cf3876511176dd1a327546233d527596e1e7e
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
Measure the time immediately in a lambda connected to the signal, rather
than wait for the QSignalSpy::wait function to return.
Change-Id: Ibbeb9ad310dc3319f0e58b37bc13f51b33d2595b
Reviewed-by: Jarkko Koivikko <jarkko.koivikko@code-q.fi>
| |
Qt Multimedia qFatal's when there is no backend, crashing the test.
Change-Id: Ib7d4db521a32e57dc00be6339501dd7b37b4a2e3
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
The documentation claims that the AVFoundation-based frameworks are
available from macOS 10.14 onwards, but the API for getting the voices
returns nothing.
Handle that in the test case for now. Ideally, our plugin infrastructure
would allow plugins to report such conditions so that the respective
engine gets ignored.
Change-Id: I0d7ab79acf41a9aea752e1789a74209faf689d3c
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
Change-Id: Ife4e719899a84482ab1f0e95028081f51f737b7f
Reviewed-by: Qt CI Bot <qt_ci_bot@qt-project.org>
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
We don't want to support it anyway. CI provisioning needs to be fixed.
Change-Id: I897f7c40178a35bb096ea470942ff54d6204756e
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
The test failed with the mock engine on macOS on ARM. The mock engine
uses a QTimer that waits for a rate-dependent time on each word before
switching back to the Ready state, so that should not be possible.
Change-Id: Id54bcdc6fa19b86d93208724fc839d81a95e0f4a
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
On openSUSE, an old libspeechd is installed that we no longer want to
support. On Ubuntu, a newer version is installed but with broken
default voices from the mary-generic festival module.
Change-Id: Ifde0fcd7378313aa7817654d57c2f08796ab81be
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
The NSSpeechSynthesizer functionality for that is broken, and we'll
remove that engine entirely.
Change-Id: I945c98961cc68db56466aff41927f07e6260cdea
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
| |
Those tests will fail, as the audio playback will either not start at
all or finish immediately.
Skip those tests, unless we use the mock engine.
Fixes: QTBUG-82545
Change-Id: Ib3b51a1b38928bcbea20f1515d231637608fd206
Reviewed-by: Volker Hilsheimer <volker.hilsheimer@qt.io>
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
| |
In every engine, the locale is a property of the voice, so store it in
the voice. This makes sense for applications that want to know which
language a voice speaks, and it allows us to simplify the code for
engines that used to store the current voice and locale: one is enough.
Change-Id: I9a01ebf2d5817accf820ac5dee6a7f24524195d7
Reviewed-by: Axel Spoerl <axel.spoerl@qt.io>
Reviewed-by: Jeremy Whiting <jpwhiting@kde.org>
Reviewed-by: Jarkko Koivikko <jarkko.koivikko@code-q.fi>
| |
Make the plugin depend on the jar target, otherwise the jar does
not get built when needed.
And since the Android engine initializes asynchronously we need
to always wait for the Ready state. This makes the tests for
properties pass; the tests waiting for the speaking to finish
abort the test run.
Fix the implementation of pitch and rate conversions to correctly
map from Qt's ranges of [-1.0, 1.0] to Android's ranges of
(0.0, 2.0] and [0.5, 2.0], respectively. Since Android operates
on floats, a rate of e.g. 0.9 comes back as a double of 0.89999 and
the test fails, so use string-based testing which allows us to limit
the precision.
Add a "warmup" utterance to the sayWithRates test so that the
first utterance doesn't take longer due to lazy initialization.
With these changes, the auto test now passes on Android.
Change-Id: Ie7698bcfdd60348441b1ea2d53cd2126d4355d5d
Reviewed-by: Qt CI Bot <qt_ci_bot@qt-project.org>
Reviewed-by: Jarkko Koivikko <jarkko.koivikko@code-q.fi>
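One plausible piecewise-linear mapping between those intervals can be written in a few lines of plain C++. The exact formulas used by the engine may differ; this only illustrates the interval conversion the commit describes, with 1.0 as the neutral value on the Android side:

```cpp
#include <algorithm>

// Qt pitch [-1.0, 1.0] -> Android pitch (0.0, 2.0]: shift by one and
// clamp strictly above zero, since Android's interval excludes 0.
inline double toAndroidPitch(double qtPitch)
{
    return std::max(0.01, 1.0 + qtPitch);
}

// Qt rate [-1.0, 1.0] -> Android rate [0.5, 2.0]: slowing down halves
// the rate at most, speeding up doubles it, so the two half-ranges
// need different slopes.
inline double toAndroidRate(double qtRate)
{
    return qtRate < 0 ? 1.0 + qtRate * 0.5 : 1.0 + qtRate;
}
```

Note the asymmetry for the rate: the negative half of Qt's range maps onto [0.5, 1.0] and the positive half onto [1.0, 2.0], which is why a single linear formula cannot cover both.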
| |
The two properties are intertwined, changing one is very likely to
change the other: setting a locale in which the current voice is not
supported has to change the voice; and setting a voice that the current
locale does not support has to change the locale.
Emit signals if, and only if, the respective other attribute has been
modified after calling the engine on the setter. The engine's setter
implementation is responsible for picking a new voice or locale, and for
testing that the setter changed the value in the first place.
Adjust mock, flite, and speechd engines accordingly.
Change-Id: Iafd585b56a1a3c36b919ed6e6f29aabbd322e97c
Reviewed-by: Jarkko Koivikko <jarkko.koivikko@code-q.fi>
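The emission contract can be sketched in plain C++. Tts, Voice, the voice list, and the counter fields are illustrative stand-ins (the counters stand in for emitting localeChanged/voiceChanged signals):

```cpp
#include <string>
#include <vector>

struct Voice { std::string name; std::string locale; };

// The engine's setter may change the *other* attribute as a side
// effect; the wrapper then notifies for each attribute that actually
// differs afterwards - if, and only if, it changed.
struct Tts {
    std::string locale = "en-US";
    Voice voice{"Alice", "en-US"};
    std::vector<Voice> voices{{"Alice", "en-US"}, {"Hans", "de-DE"}};
    int localeChangedCount = 0;  // stands in for localeChanged()
    int voiceChangedCount = 0;   // stands in for voiceChanged()

    void setLocale(const std::string &newLocale) {
        const std::string oldLocale = locale;
        const Voice oldVoice = voice;
        // "Engine" part: pick the first voice supporting the locale;
        // an unsupported locale leaves everything untouched.
        for (const Voice &v : voices) {
            if (v.locale == newLocale) {
                locale = newLocale;
                voice = v;
                break;
            }
        }
        // Compare after the engine ran, emit only on actual change.
        if (locale != oldLocale) ++localeChangedCount;
        if (voice.name != oldVoice.name) ++voiceChangedCount;
    }
};
```

Setting the already-current locale emits nothing; switching to a locale the current voice cannot speak emits both notifications, because the voice changed as a side effect.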
| |
Leaving it to the engine implementations to ignore and return false in
setters when the new value is the same as the old value is error prone
and unnecessary, even if engines can sometimes choose a slightly more
optimal way to compare values.
Ensure that we have the right semantics in the QTextToSpeech class
directly, and add test coverage.
Change-Id: Ie18b9bf4577e18f9fca9f70d1310856d1ab35f7b
Reviewed-by: Jarkko Koivikko <jarkko.koivikko@code-q.fi>
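The guard being centralized is the standard property-setter idiom, sketched here in plain C++ with illustrative names (the counter stands in for emitting a rateChanged() signal and for calling into the engine):

```cpp
// A setter that returns early when the value is unchanged, so engine
// implementations no longer need to implement (and can no longer
// forget) this check themselves.
struct Property {
    double rate = 0.0;
    int notifications = 0;  // stands in for emit rateChanged(rate)

    void setRate(double newRate) {
        if (newRate == rate)
            return;          // same value: no engine call, no signal
        rate = newRate;
        ++notifications;
    }
};
```

With the guard in the front-end class, setting the same value twice results in exactly one engine call and one notification, regardless of how any particular engine compares values.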