Introduction
openQA is an automated test tool that makes it possible to test the whole installation process of an operating system. It uses virtual machines to reproduce the process, check the output (both serial console and screen) in every step and send the necessary keystrokes and commands to proceed to the next. openQA can check whether the system can be installed, whether it works properly in 'live' mode, whether applications work or whether the system responds as expected to different installation options and commands.
Even more importantly, openQA can run several combinations of tests for every revision of the operating system, reporting the errors detected for each combination of hardware configuration, installation options and variant of the operating system.
openQA is free software released under the GPLv2 license. The source code and documentation are hosted in the os-autoinst organization on GitHub.
This document describes the general operation and usage of openQA. The main goal is to provide a general overview of the tool, with all the information needed to become a happy user. More advanced topics like installation, administration or development of new tests are covered by further documents available in the official repository.
Architecture
Although the project as a whole is referred to as openQA, there are in fact several components that are hosted in separate repositories as shown in the following figure.
Figure 1. openQA architecture
The heart of the test engine is a standalone application called 'os-autoinst' (blue). In each execution, this application creates a virtual machine and uses it to run a set of test scripts (red). 'os-autoinst' generates a video, screenshots and a JSON file with detailed results.
'openQA' (green) on the other hand provides a web based user interface and infrastructure to run 'os-autoinst' in a distributed way. The web interface also provides a JSON based REST-like API for external scripting and for use by the worker program. Workers fetch data and input files from openQA for os-autoinst to run the tests. A host system can run several workers. The openQA web application takes care of distributing test jobs among workers. Web application and workers don’t have to run on the same machine but can be connected via network instead.
Basic concepts
Glossary
The following terms are used within the context of openQA:
test modules | an individual test case in a single Perl module file, e.g. "sshxterm". If not further specified a test module is denoted with its "short name" equivalent to the filename including the test definition. The "full name" is composed of the test group (TBC), which itself is formed by the top folder of the test module file, and the short name, e.g. "x11-sshxterm" (for x11/sshxterm.pm) |
test suite | a collection of test modules, e.g. "textmode". All test modules within one test suite are run serially |
job | one run of individual test cases in a row denoted by a unique number for one instance of openQA, e.g. one installation with subsequent testing of applications within gnome |
test run | equivalent to job |
test result | the result of one job, e.g. "passed" with the details of each individual test module |
test step | the execution of one test module within a job |
distri | a test distribution but also sometimes referring to a product (CAUTION: ambiguous, historically a "GNU/Linux distribution"), composed of multiple test modules in a folder structure that compose test suites, e.g. "opensuse" (test distribution, short for "os-autoinst-distri-opensuse") |
product | the main "system under test" (SUT), e.g. "openSUSE" |
job group | equivalent to product, used in context of the webUI |
version | one version of a product, not to be confused with builds, e.g. "Tumbleweed" |
flavor | a specific variant of a product to distinguish differing variants, e.g. "DVD" |
arch | an architecture variant of a product, e.g. "x86_64" |
machine | additional variant of machine, e.g. used for "64bit", "uefi", etc. |
scenario | a composition of <distri>-<version>-<flavor>-<arch>-<test_suite>@<machine>, e.g. "openSUSE-Tumbleweed-DVD-x86_64-gnome@64bit", nicknamed koala |
build | different versions of a product as tested, can be considered a "sub-version" of version, e.g. "Build1234"; CAUTION: ambiguity: either with the prefix "Build" included or not |
Jobs
One of the most important features of openQA is that it can be used to test several combinations of actions and configurations. For every one of those combinations, the system creates a virtual machine, performs certain steps and returns an overall result. Every one of those executions is called a 'job'. Every job is labeled with a numeric identifier and has several associated 'settings' that will drive its behavior.
A job goes through several states:
scheduled Initial state for recently created jobs. Queued for future execution.
running In progress.
cancelled The job was explicitly cancelled by the user or was replaced by a clone (see below).
waiting The job is in 'interactive mode' (see below) and waiting for input.
done Execution finished.
Jobs in state 'done' have typically gone through a whole sequence of steps (called 'testmodules'), each one with its own result. But in addition to those partial results, a finished job also provides an overall result from the following list.
none For jobs that have not reached the 'done' state.
passed No critical check failed during the process. It doesn’t necessarily mean that all testmodules were successful or that no single assertion failed.
failed At least one assertion considered to be critical was not satisfied at some point.
softfailed At least one non-critical assertion was not satisfied at some point (e.g. a soft failure has been recorded explicitly via record_soft_failure) or workaround needles are in place.
incomplete The job is no longer running but no result was provided. Either it was cancelled while running or it crashed.
Sometimes the reason for a failure is not an error in the tested operating system itself, but an outdated test or a problem in the execution of the job for some external reason. In those situations, it makes sense to re-run a given job from the beginning once the problem is fixed or the tests have been updated. This is done by means of 'cloning'. Every job can be superseded by a clone which is scheduled to run with exactly the same settings as the original job. If the original job is not yet in the 'done' state, it is cancelled immediately. From that point in time, the clone becomes the current version and the original job is considered outdated (and can be filtered in the listing) but its information and results (if any) are kept for future reference.
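Clones can be created from the web UI or through the REST-like API. A hedged sketch using the client script (assuming a local instance, an example job ID 42, and that the job restart route is available; the response reports the ID of the newly scheduled clone):
# supersede job 42 with a clone that runs with the same settings
/usr/share/openqa/script/client --host http://localhost jobs/42/restart post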
Needles
One of the main mechanisms for openQA to know the state of the virtual machine is checking the presence of some elements in the machine’s 'screen'. This is performed using fuzzy image matching between the screen and the so called 'needles'. A needle specifies both the elements to search for and a list of tags used to decide which needles should be used at any moment.
A needle consists of a full screenshot in PNG format and a JSON file with the same name (e.g. foo.png and foo.json) containing the associated data, like which areas inside the full screenshot are relevant or the mentioned list of tags.
{ "area" : [ { "xpos" : INTEGER, "ypos" : INTEGER, "width" : INTEGER, "height" : INTEGER, "type" : ( "match" | "ocr" | "exclude" ), "match" : INTEGER, // 0-100. similarity percentage }, ... ], "tags" : [ STRING, ... ]}
Areas
There are three kinds of areas:
Regular areas define relevant parts of the screenshot. Those must match with at least the specified similarity percentage. Regular areas are displayed as green boxes in the needle editor and as green or red frames in the needle view (green for matching areas, red for non-matching ones).
OCR areas also define relevant parts of the screenshot. However, an OCR algorithm is used for matching. In the needle editor OCR areas are displayed as orange boxes. To turn a regular area into an OCR area within the needle editor, double click the concerning area twice. Note that such needles are only rarely used.
Exclude areas can be used to ignore parts of the reference picture. In the needle editor exclude areas are displayed as red boxes. To turn a regular area into an exclude area within the needle editor, double click the concerning area. In the needle view exclude areas are displayed as gray boxes.
Interactive mode
There are several points in time during the execution of a job at which openQA tries to match the screen with the available needles, reacting to the result of that check. If the job is running in interactive mode it will stop the execution at that point, freezing the virtual machine and waiting for user input before proceeding. At that moment, the user can modify the existing needles or can create a new one using as a starting point either the current screen of the virtual machine or one of the existing needles. Once the needles are adjusted, the user can command the job to reload the list of needles and continue with the execution.
enable interactive mode Automatically enters the waiting state (i.e. waitforneedle) in case a matching needle cannot be found before the timeout.
stop waiting for needle Stops the waitforneedle call immediately without waiting for the timeout.
continue waiting for needle Continues testing but will enter waitforneedle again in case a matching needle cannot be found before the timeout.
reload needles and retry Reloads the needles and retries after 5 seconds. This helps if a new needle is created before the retry.
open needle editor Opens the needle editor so the user can create a new needle or modify the existing ones.
The interactive mode is especially useful when creating needles for a new operating system or when the look & feel has changed and several needles need to be adjusted accordingly.
Access management
Some actions in openQA require special privileges. openQA provides authentication through OpenID. By default, openQA is configured to use the openSUSE OpenID provider, but it can very easily be configured to use any other valid provider. Every time a new user logs into an instance, a new user profile is created. That profile only contains the OpenID identity and two flags used for access control:
operator Means that the user is able to manage jobs, performing actions like creating new jobs, cancelling them, etc.
admin Means that the user is able to manage users (granting or revoking operator and admin rights) as well as job templates and other related information (see the corresponding section).
Many of the operations in an openQA instance are not performed through the web interface but using the REST-like API. The most obvious examples are the workers and the scripts that fetch new versions of the operating system and schedule the corresponding tests. Those clients must be authorized by an operator using an API key with an associated shared secret.
For that purpose, users with the operator flag have access in the web interface to a page that allows them to manage as many API keys as they may need. For every key, a secret is automatically generated. The user can then configure the workers or any other client application to use any pair of API key and secret they own. Any client of the REST-like API using one of those API keys will be considered to be acting on behalf of the associated user. So the API key not only has to be correct and valid (not expired), it also has to belong to a user with operator rights.
For more insights about authentication, authorization and the technical details of the openQA security model, refer to the detailed blog post about the subject by the openQA development team.
Job groups
A job can belong to a job group. Those job groups are displayed on the index page and in the Job Groups menu on the navigation bar. From there the job group overview pages can be accessed. Besides the test results, the job group overview pages provide a description about the job group and allow commenting.
Job groups have properties. These properties are mostly cleanup related. The configuration can be done in the operators menu for job groups.
It is also possible to put job groups into categories. The nested groups will then inherit properties from the category. The categories are meant to combine job groups with common builds so test results for the same build can be shown together on the index page.
Cleanup
Important | openQA automatically deletes data that it considers "old" based on different settings. For example, job data is deleted from old jobs by the gru task. |
The following cleanup settings can be done on job-group-level:
size limit | Limits the size of assets |
keep logs for | Specifies how long logs of a non-important job are retained after it finished |
keep important logs for | Specifies how long logs of an important job are retained after it finished |
keep results for | Specifies how long results of a non-important job are retained after it finished |
keep important results for | Specifies how long results of an important job are retained after it finished |
The defaults for those values are defined in lib/OpenQA/Schema/JobGroupDefaults.pm.
NOTE Deletion of job results includes deletion of logs and will cause the job to be completely removed from the database.
NOTE Jobs which do not belong to a job group are currently not affected by the mentioned cleanup properties.
Using the client script
Just as the worker uses an API key and secret, every user of the client script must do the same. The same API key and secret as previously created can be used, or a new one can be created over the webUI.
The personal configuration should be stored in a file ~/.config/openqa/client.conf in the same format as previously described for the client.conf, i.e. with a section for each machine, e.g. localhost.
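With that file in place the client script picks up the credentials automatically. A minimal sketch of a call against a local instance (the jobs route simply lists current jobs):
# list jobs on the local instance; API key and secret are read from ~/.config/openqa/client.conf
/usr/share/openqa/script/client --host localhost jobs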
Using job templates to automate job creation
The problem
When testing an operating system, especially when doing continuous testing, there is always a certain combination of jobs, each one with its own settings, that needs to be run for every revision. Those combinations can be different for different 'flavors' of the same revision, like running a different set of jobs for each architecture or for the Full and the Lite versions. This combinational problem can go one step further if openQA is being used for different kinds of tests, like running some simple pre-integration tests for some snapshots combined with more comprehensive post-integration tests for release candidates.
This section describes how an instance of openQA can be configured using the options in the admin area to automatically create all the required jobs for each revision of your operating system that needs to be tested. If you are starting from scratch, you should probably go through the following order:
Define machines in 'Machines' menu
Define medium types (products) you have in 'Medium Types' menu
Specify various collections of tests you want to run in the 'Test suites' menu
Go to the template matrix in the 'Job templates' menu and decide what combinations make sense and need to be tested
Machines, mediums and test suites can all set various configuration variables. Job templates define how the test suites, mediums and machines should be combined in various ways to produce individual 'jobs'. All the variables from the test suite, medium and machine for the 'job' are combined and made available to the actual test code run by the 'job', along with variables specified as part of the job creation request. Certain variables also influence openQA’s and/or os-autoinst’s own behavior in terms of how it configures the environment for the job. Variables that influence os-autoinst’s behavior are documented in the file doc/backend_vars.asciidoc in the os-autoinst repository.
In openQA we can parametrize a test to describe for which product it will run and on which kind of machines it will be executed. For example, a test like KDE can be run for any product that has KDE installed, and can be tested on x86-64 and i586 machines. If we write this as triples, we can create a list like this to characterize KDE tests:
(Product, Test Suite, Machine)
(openSUSE-DVD-x86_64, KDE, 64bit)
(openSUSE-DVD-x86_64, KDE, Laptop-64bit)
(openSUSE-DVD-x86_64, KDE, USBBoot-64bit)
(openSUSE-DVD-i586, KDE, 32bit)
(openSUSE-DVD-i586, KDE, Laptop-32bit)
(openSUSE-DVD-x86_64, KDE, USBBoot-32bit)
(openSUSE-DVD-i586, KDE, 64bit)
(openSUSE-DVD-i586, KDE, Laptop-64bit)
(openSUSE-DVD-x86_64, KDE, USBBoot-64bit)
For every triplet, we need to configure a different instance of os-autoinst with a different set of parameters.
Medium Types (products)
A medium type (product) in openQA is a simple description without any concrete meaning. It basically consists of a name and a set of variables that define or characterize this product in os-autoinst.
Some example variables used by openSUSE are:
ISO_MAXSIZE contains the maximum size of the product. There is a test that checks that the current size of the product is less than or equal to this variable.
DVD if it is set to 1, this indicates that the medium is a DVD.
LIVECD if it is set to 1, this indicates that the medium is a live image (can be a CD or USB)
GNOME this variable, if it is set to 1, indicates that it is a GNOME-only distribution.
PROMO marks the promotional product.
RESCUECD is set to 1 for rescue CD images.
Test Suites
This is the form where we define the different tests that we created for openQA. A test consists of a name, a priority and a set of variables that are used inside this particular test. The priority is used in the scheduler to choose the next job. If multiple jobs are scheduled and their requirements for running them are fulfilled, the ones with a lower value for the priority are triggered. The id is the second sorting key: of two jobs with equal requirements and the same priority, the one with the lower id is triggered first.
Some sample variables used by openSUSE are:
BTRFS if set, the file system will be BtrFS.
DESKTOP possible values are 'kde' 'gnome' 'lxde' 'xfce' or 'textmode'. Used to indicate the desktop selected by the user during the test.
DOCRUN used for documentation tests.
DUALBOOT dual boot testing, needs HDD_1 and HDDVERSION.
ENCRYPT encrypt the home directory via YaST.
HDDVERSION used together with HDD_1 to set the operating system previously installed on the hard disk.
INSTALLONLY only basic installation.
INSTLANG installation language. Actually used only in documentation tests.
LIVETEST the test is on a live medium, do not install the distribution.
LVM select LVM volume manager.
NICEVIDEO used for rendering a result video for use in show rooms, skipping ugly and boring tests.
NOAUTOLOGIN unmark autologin in YaST
NUMDISKS total number of disks in QEMU.
REBOOTAFTERINSTALL if set to 1, will reboot after the installation.
SCREENSHOTINTERVAL used with NICEVIDEO to improve the video quality.
SPLITUSR a YaST configuration option.
TOGGLEHOME a YaST configuration option.
UPGRADE upgrade testing, needs HDD_1 and HDDVERSION.
VIDEOMODE if the value is 'text', the installation will be done in text mode.
Some of the variables usually set in test suites that influence openQA and/or os-autoinst’s own behavior are:
HDDMODEL variable to set the HDD hardware model
HDDSIZEGB hard disk size in GB. Used together with BtrFS variable
HDD_1 path for the pre-created hard disk
RAIDLEVEL RAID configuration variable
QEMUVGA parameter to declare the video hardware configuration in QEMU
Machines
You need to have at least one machine set up to be able to run any tests. Those machines represent virtual machine types that you want to test. To make tests actually happen, you have to have an 'openQA worker' connected that can fulfill those specifications.
Name. User defined string - only needed for the operator to identify the machine configuration.
Backend. What backend should be used for this machine. The recommended value is qemu as it is the most tested one, but other options (such as kvm2usb or vbox) are also possible.
Variables. Most machine variables influence os-autoinst’s behavior in terms of how the test machine is set up. A few important examples:
QEMUCPU can be 'qemu32' or 'qemu64' and specifies the architecture of the virtual CPU.
QEMUCPUS is an integer that specifies the number of cores you wish for.
LAPTOP if set to 1, QEMU will create a laptop profile.
USBBOOT when set to 1, the image will be loaded through an emulated USB stick.
Variable expansion
Any variable defined in Test Suite, Machine or Product table can refer to another variable using this syntax: %NAME%. When the test job is created, the string will be substituted with the value of the specified variable at that time.
For example this variable defined for Test Suite:
PUBLISH_HDD_1 = %DISTRI%-%VERSION%-%ARCH%-%DESKTOP%.qcow2
may be expanded to this job variable:
PUBLISH_HDD_1 = opensuse-13.1-i586-kde.qcow2
Variable precedence
It’s possible to define the same variable in multiple places that would all be used for a single job - for instance, you may have a variable defined in both a test suite and a product that appear in the same job template. The precedence order for variables is as follows (from lowest to highest):
Product
Machine
Test suite
API POST query parameters
That is, variable values set as part of the API request that triggers the jobs will 'win' over values set at any of the other locations.
If you need to override this precedence - for example, you want the value set in one particular test suite to take precedence over a setting of the same value from the API request - you can add a leading + to the variable name. For instance, if you set +VARIABLE = foo in a test suite, and passed VARIABLE=bar in the API request, the test suite setting would 'win' and the value would be foo.
If the same variable is set with a + prefix in multiple places, the same precedence order described above will apply to those settings.
Testing openSUSE or Fedora
An easy way to start using openQA is to start testing openSUSE or Fedora as they have everything set up and prepared to ease the initial deployment. If you want to go deeper, you can configure the whole of openQA manually from scratch, but this document should help you to get started faster.
Getting tests
First you need to get actual tests. You can get openSUSE tests and needles (the expected results) from GitHub. They belong in the /var/lib/openqa/tests/opensuse directory. To make it easier, you can just run
/usr/share/openqa/script/fetchneedles
This will download the tests to the correct location and set the correct rights as well.
Fedora’s tests are also in git. To use them, you may do:
cd /var/lib/openqa/share/tests
mkdir fedora
cd fedora
git clone https://pagure.io/fedora-qa/os-autoinst-distri-fedora.git
./templates --clean
cd ..
chown -R geekotest fedora/
Getting openQA configuration
To get everything configured to actually run the tests, there are plenty of options to set in the admin interface. If you plan to test openSUSE Factory, using the tests mentioned in the previous section, the easiest way to get started is the following command:
/var/lib/openqa/share/tests/opensuse/products/opensuse/templates [--apikey API_KEY] [--apisecret API_SECRET]
This will load some default settings that were used at some point in time in the openSUSE production openQA. Therefore those should work reasonably well with openSUSE tests and needles. This script uses /usr/share/openqa/script/load_templates; consider reading its help page (--help) for documentation on possible extra arguments.
For Fedora, similarly, you can call:
/var/lib/openqa/share/tests/fedora/templates [--apikey API_KEY] [--apisecret API_SECRET]
Some Fedora tests require special hard disk images to be present in /var/lib/openqa/share/factory/hdd/fixed. The createhdds.py script in the createhdds repository can be used to create these. See the documentation in that repo for more information.
Adding a new ISO to test
To start testing a new ISO put it in /var/lib/openqa/share/factory/iso and call the following commands:
# Run the first test
/usr/share/openqa/script/client isos post \
    ISO=openSUSE-Factory-NET-x86_64-Build0053-Media.iso \
    DISTRI=opensuse \
    VERSION=Factory \
    FLAVOR=NET \
    ARCH=x86_64 \
    BUILD=0053
If your openQA is not running on port 80 on 'localhost', you can add the option --host=http://otherhost:9526 to specify a different port or host.
Warning | Use only the ISO filename in the 'client' command. You must place the file in /var/lib/openqa/share/factory/iso. You cannot place the file elsewhere and specify its path in the command. |
For Fedora, a sample run might be:
# Run the first test
/usr/share/openqa/script/client isos post \
    ISO=Fedora-Everything-boot-x86_64-Rawhide-20160308.n.0.iso \
    DISTRI=fedora \
    VERSION=Rawhide \
    FLAVOR=Everything-boot-iso \
    ARCH=x86_64 \
    BUILD=Rawhide-20160308.n.0
More details on triggering tests can also be found in the Users Guide.
Pitfalls
Take a look at Documented Pitfalls.
Introduction
openQA is an automated test tool that makes it possible to test the whole installation process of an operating system. It’s free software released under the GPLv2 license. The source code and documentation are hosted in the os-autoinst organization on GitHub.
This document provides the information needed to install and set up the tool, as well as information useful for everyday administration of the system. It’s assumed that the reader is already familiar with openQA and has already read the Starter Guide, available at the official repository.
Installation
Keep in mind that there can be disruptive changes between openQA versions. You need to be sure that the webui and the worker that you are using have the same version number or, at least, are compatible.
For example, the package distributed with openSUSE Leap 42.3 is not compatible with the version on Tumbleweed. And the package distributed with Tumbleweed may not be compatible with the version in the development package.
Installation from distribution packages
The easiest way to install openQA is from distribution packages.
For openSUSE, packages are available for Leap 42.3 and later.
For Fedora, packages are available in the official repositories for Fedora 23and later.
You can install the packages using these commands.
# openSUSE Leap 42.3+
zypper in openQA

# Fedora 23+
dnf install openqa openqa-httpd
Installation from development versions of packages
You can find the development version of openQA in OBS in the openQA:devel repository.
To add the development repository to your system, you can use these commands.
# openSUSE Tumbleweed
zypper ar -f obs://devel:openQA/openSUSE_Tumbleweed devel-openQA

# openSUSE Leap 42.3
zypper ar -f obs://devel:openQA/openSUSE_Leap_42.3 devel-openQA
zypper ar -f obs://devel:openQA:Leap:42.3/openSUSE_Leap_42.3 devel-openQA-perl-modules

# openSUSE Leap 42.2
zypper ar -f obs://devel:openQA/openSUSE_Leap_42.2 devel-openQA
zypper ar -f obs://devel:openQA:Leap:42.2/openSUSE_Leap_42.2 devel-openQA-perl-modules
Then you can install them using this command.
# all openSUSE
zypper in devel-openQA:openQA
Basic configuration
Apache proxy
It is required to run openQA behind an HTTP proxy (Apache, nginx, etc.). See the openqa.conf.template config file in /etc/apache2/vhosts.d (openSUSE) or /etc/httpd/conf.d (Fedora). To make everything work correctly on openSUSE, you need to enable the 'headers', 'proxy', 'proxy_http' and 'proxy_wstunnel' modules using the command 'a2enmod'. This is not necessary on Fedora.
# openSUSE Only
# You can check what modules are enabled by using 'a2enmod -l'
a2enmod headers
a2enmod proxy
a2enmod proxy_http
a2enmod proxy_wstunnel
For a basic setup, you can copy openqa.conf.template to openqa.conf and modify the ServerName setting if required. This will direct all HTTP traffic to openQA.
cp /etc/apache2/vhosts.d/openqa.conf.template /etc/apache2/vhosts.d/openqa.conf
TLS/SSL
By default openQA expects to be run with HTTPS. The openqa-ssl.conf.template Apache config file is available as a base for creating the Apache config; you can copy it to openqa-ssl.conf and uncomment any lines you like, then ensure a key and certificate are installed to the appropriate location (depending on distribution and whether you uncommented the lines for key and cert location in the config file). On openSUSE, you should also add SSL to the APACHE_SERVER_FLAGS so it looks like this in /etc/sysconfig/apache2:
APACHE_SERVER_FLAGS="SSL"
If you don’t have a TLS/SSL certificate for your host you must turn HTTPS off. You can do that in /etc/openqa/openqa.ini:
[openid]
httpsonly = 0
Database
Since version 4.5.1512500474.437cc1c7 of openQA, PostgreSQL is used as the database.
To configure access to the database in openQA, edit /etc/openqa/database.ini and change the settings in the [production] section.
The dsn value format technically depends on the database type and is documented for PostgreSQL at DBD::Pg.
Example for connecting to local PostgreSQL database
[production]
dsn = dbi:Pg:dbname=openqa
Example for connecting to remote PostgreSQL database
[production]
dsn = dbi:Pg:dbname=openqa;host=db.example.org
user = openqa
password = somepassword
For older versions of openQA, you can migrate from SQLite to PostgreSQL according to DB migration from SQLite to PostgreSQL.
Run the web UI
systemctl start postgresql
systemctl start openqa-gru
systemctl start openqa-webui
# openSUSE
systemctl restart apache2
# Fedora
# for now this is necessary to allow Apache to connect to openQA
setsebool -P httpd_can_network_connect 1
systemctl restart httpd
The openQA web UI should be available on http://localhost/ now. To ensure openQA runs on each boot, you should also systemctl enable the same services.
systemctl enable postgresql
systemctl enable openqa-gru
systemctl enable openqa-webui
Run workers
Workers are processes running virtual machines to perform the actual testing. They are distributed as a separate package and can be installed on multiple machines while still using only one WebUI.
# openSUSE
zypper in openQA-worker
# Fedora
dnf install openqa-worker
To allow workers to access your instance, you need to log into openQA as operator and create a pair of API key and secret. Once you are logged in, open the user menu in the top right corner and follow the link 'manage API keys'. Click the 'create' button to generate a key and secret. There is also a script available for creating an admin user and an API key+secret pair non-interactively, /usr/share/openqa/script/create_admin, which can be useful for scripted deployments of openQA. Copy and paste the key and secret into /etc/openqa/client.conf on the machine(s) where the worker is installed. Make sure to put them in a section reflecting your webserver URL. In the simplest case, your client.conf may look like this:
[localhost]
key = 1234567890ABCDEF
secret = 1234567890ABCDEF
To start the workers you can use the provided systemd files via systemctl start openqa-worker@1. This will start worker number one. You can start as many workers as you dare, you just need to supply a different 'worker id' (the number after @).
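To have workers start again after a reboot, the same template units can also be enabled; a short sketch for two instances:
# start worker instances 1 and 2 now and enable them so they come up on boot
systemctl enable --now openqa-worker@1
systemctl enable --now openqa-worker@2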
You can also run workers manually from command line.
install -d -m 0755 -o _openqa-worker /var/lib/openqa/pool/X
sudo -u _openqa-worker /usr/share/openqa/script/worker --instance X
This will run a worker manually, showing you debug output. If you haven’t installed 'os-autoinst' from packages, make sure to pass the --isotovideo option to point to the checkout dir where isotovideo is, not to /usr/lib! Otherwise it will have trouble finding its Perl modules.
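A hedged sketch of such a manual invocation against a git checkout (the path /path/to/os-autoinst is only an example, adjust it to your local checkout):
# run worker instance 1 with isotovideo taken from a local os-autoinst checkout
sudo -u _openqa-worker /usr/share/openqa/script/worker --instance 1 \
    --isotovideo /path/to/os-autoinst/isotovideo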
User authentication
openQA supports three different authentication methods - OpenID (default), iChain and Fake. See the auth section in /etc/openqa/openqa.ini.
[auth]
# method name is case sensitive!
method = OpenID|iChain|Fake
Independently of the method used, the first user that logs in (if there is no admin yet) will automatically get administrator rights!
OpenID
By default openQA uses OpenID with opensuse.org as the OpenID provider. The OpenID method has its own openid section in /etc/openqa/openqa.ini:
[openid]
## base url for openid provider
provider = https://www.opensuse.org/openid/user/
## enforce redirect back to https
httpsonly = 1
openQA supports only OpenID versions up to 2.0. The newer OpenID Connect and OAuth are currently not supported.
iChain
Use only if you use iChain (NetIQ Access Manager) proxy on your hosting server.
Fake
For development purposes only! Fake authentication bypasses any authentication and automatically allows any login request as 'Demo user' with administrator privileges and without a password. To ease worker testing, an API key and secret is created (or updated) with a validity of one day during login. You can then use the following as /etc/openqa/client.conf:
[localhost]
key = 1234567890ABCDEF
secret = 1234567890ABCDEF
If you switch the authentication method from Fake to any other, review your API keys! You may be vulnerable for up to a day until the Fake API key expires.
Where to now?
From this point on, you can refer to the Getting Started guide to fetch the test cases and possibly take a look at the Test Developer Guide.
Advanced configuration
Setting up git support
Editing needles from the web can optionally commit new or changed needles automatically to git. To do so, you need to enable git support by setting
[global]
scm = git
in /etc/openqa/openqa.ini. Once you do so and restart the web interface, openQA will automatically commit new needles to the git repository.
You may want to add some description to automatic commits coming from the web UI. You can do so by setting your configuration in the repository (/var/lib/os-autoinst/needles/.git/config) to some reasonable defaults such as:
[user]
email = whatever@example.com
name = openQA web UI
To enable automatic pushing of the repo as well, you need to add the following to your openqa.ini:
[scm git]
do_push = yes
Depending on your setup, you might need to generate and propagate ssh keys for the user 'geekotest' to be able to push.
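A hedged sketch of preparing such a key (assuming pushes go over SSH; the public key then has to be authorized on the remote, e.g. as a deploy key):
# generate a key pair for the geekotest user and show the public part
sudo -u geekotest -H ssh-keygen -t ed25519
sudo -u geekotest -H sh -c 'cat ~/.ssh/id_ed25519.pub'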
Referer settings to auto-mark important jobs
Automatic cleanup of old results (see GRU jobs) can sometimes render important tests useless, for example when a bug report links to an openQA job which no longer exists. A job can be manually marked as important to prevent quick cleanup, or a referer can be set so that when a job is accessed from a particular web page (for example Bugzilla), the job is automatically labeled as linked and treated as important.
The list of recognized referers is a space-separated list configured in /etc/openqa/openqa.ini:
[global]
recognized_referers = bugzilla.suse.com bugzilla.opensuse.org
Worker settings
The default behavior for all workers is to use the 'Qemu' backend and connect to 'http://localhost'. If you want to change some of those options, you can do so in /etc/openqa/workers.ini. For example, to point the workers to the FQDN of your host (needed if test cases need to access files of the host) use the following setting:
[global]
HOST = http://openqa.example.com
Once you have workers running, they should show up as 'idle' in the workers section of the openQA admin area. At that point you have your own instance of openQA up and running and all that is left is to set up some tests.
Configuring remote workers
There are some additional requirements to get a remote worker running. The first is to ensure shared storage between the openQA WebUI and the workers. The directory /var/lib/openqa/share contains all required data and should be shared with read-write access across all nodes present in the openQA cluster. Choosing a proper shared storage solution for their specific needs is intentionally left to the system administrator.
Example of an NFS configuration: the NFS server is the host where the openQA WebUI is running. Content of /etc/exports:
/var/lib/openqa/share *(fsid=0,rw,no_root_squash,sync,no_subtree_check)
NFS clients are the hosts where the openQA workers are running. Run the following command:
mount -t nfs openQA-webUI-host:/var/lib/openqa/share /var/lib/openqa/share
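To make that mount persistent across reboots an /etc/fstab entry can be added on each worker; a minimal sketch (the host name openQA-webUI-host and the default NFS options are placeholders, adapt them to your environment):
# mount the shared openQA directory from the web UI host at every boot
echo 'openQA-webUI-host:/var/lib/openqa/share /var/lib/openqa/share nfs defaults 0 0' >> /etc/fstab
mount -a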
Configuring worker to use more than one openQA server
When there are multiple openQA web interfaces (openQA instances) available a worker can be configured to register and accept jobs from all of them.
Requirements:
/etc/openqa/client.conf must contain API keys and secrets to all instances
Shared storage from all instances must be properly mounted
In /etc/openqa/workers.ini, enter space-separated instance hosts and optionally configure where the shared storage is mounted. Example:
[global]
HOSTS = openqa.opensuse.org openqa.fedora.fedoraproject.org

[openqa.opensuse.org]
SHARE_DIRECTORY = /var/lib/openqa/opensuse

[openqa.fedoraproject.org]
SHARE_DIRECTORY = /var/lib/openqa/fedora
Configuring SHARE_DIRECTORY is not a hard requirement. The worker will try the following directories prior to registering with an openQA instance:
SHARE_DIRECTORY
/var/lib/openqa/$instance_host
/var/lib/openqa/share
/var/lib/openqa
fail if none of the above is available
Once the worker registers with an openQA instance it checks for available jobs and starts accepting websocket commands. The worker accepts jobs as they come in; there is no priority or other ordering support at the moment. It is possible to mix the local openQA instance with remote instances or to use only remote instances.
Asset Caching
If your network is slow or you experience long needle loading times, you might want to consider enabling caching on your remote workers. To enable caching, /var/lib/openqa/cache must be created and the right permissions given to the 'geekotest' user. If you install openQA through the repositories, said directory will be created for you.
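If you installed openQA some other way, a minimal sketch of creating the cache directory by hand (assuming the default 'geekotest' user) could be:
# create the cache directory with the ownership expected by the worker cache
install -d -m 0755 -o geekotest /var/lib/openqa/cache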
Then configure the cache in /etc/openqa/workers.ini:
[global]
HOST=http://webui
CACHEDIRECTORY = $cache_location
CACHELIMIT = 50 # This is currently noop. Cache supports limiting, but is not enabled.

[http://webui]
TESTPOOLSERVER = rsync://yourlocation/tests
This will allow the workers to download the assets from the webUI and use them locally. If TESTPOOLSERVER is set, tests and needles will also be cached by the worker.
Auditing - tracking openQA changes
The auditing plugin enables openQA administrators to maintain an overview of what is happening with the system. The plugin records which event was triggered by whom, when, and what the request looked like. Actions done by openQA workers are tracked under the user whose API keys the workers are using.
The audit log is directly accessible from the Admin menu.
Auditing is enabled by default and can be disabled by a global configuration option in /etc/openqa/openqa.ini:
[global]
audit_enabled = 0
The audit section of /etc/openqa/openqa.ini allows excluding some events from logging using a space-separated blacklist:
[audit]
blacklist = job_grab job_done
List of events tracked by the auditing plugin:
Assets:
asset_register asset_delete
Workers:
worker_register command_enqueue
Jobs:
iso_create iso_delete iso_cancel
jobtemplate_create jobtemplate_delete
job_create job_grab job_delete job_update_result job_done jobs_restart job_restart job_cancel job_duplicate
jobgroup_create jobgroup_connect
Tables:
table_create table_update table_delete
Users:
user_new_comment user_update_comment user_delete_comment user_login
Needles:
needle_delete needle_modify
Some of these events are very common and may clutter the audit database. For this reason the job_grab and job_done events are blacklisted by default.
Note | Upgrading openQA does not automatically update /etc/openqa/openqa.ini. Review your configuration after upgrade. |
Filesystem Layout
The openQA web interface can be started via MOJO_REVERSE_PROXY=1 morbo script/openqa in development mode.
/var/lib/openqa/ must be owned by root and contain several subdirectories, most of which must be owned by the user that runs openQA (default 'geekotest'):
db contains the database lockfile
images is where the server stores test screenshots and thumbnails
share contains shared directories for remote workers, can be owned by root
share/factory contains test assets and temp directory, can be owned by root but sysadmin must create subdirs
share/factory/iso and share/factory/iso/fixed contain ISOs for tests
share/factory/hdd and share/factory/hdd/fixed contain hard disk images for tests
share/factory/repo and share/factory/repo/fixed contain repositories for tests
share/factory/other and share/factory/other/fixed contain miscellaneous test assets (e.g. kernels and initrds)
share/factory/tmp is used as a temporary directory (openQA will create it if it owns share/factory)
share/tests contains the tests themselves
testresults is where the server stores test logs and test-generated assets
Each of the asset directories (factory/iso, factory/hdd, factory/repo and factory/other) may contain a fixed/ subdirectory, and assets of the same type may be placed in that directory. Placing an asset in the fixed/ subdirectory indicates that it should not be deleted to save space: the GRU task which removes old assets when the size of all assets for a given job group is above a specified size will ignore assets in the fixed/ subdirectories.
It also contains several symlinks which are necessary due to various things moving around over the course of openQA’s development. All the symlinks can of course be owned by root:
script (symlink to /usr/share/openqa/script/)
tests (symlink to share/tests)
factory (symlink to share/factory)
It is always best to use the canonical locations, not the compatibility symlinks - so run scripts from /usr/share/openqa/script, not /var/lib/openqa/script.
You only need the asset directories for the asset types you will actually use, e.g. if none of your tests refer to openQA-stored repositories, you will need no factory/repo directory. The distribution packages may not create all asset directories, so make sure the ones you need are created if necessary. Packages will likewise usually not contain any tests; you must create your own tests, or use existing tests for some distribution or other piece of software.
The worker needs to own /var/lib/openqa/pool/$INSTANCE, e.g.
/var/lib/openqa/pool/1
/var/lib/openqa/pool/2
…. - add more if you have more CPUs/disks
You can also give the whole pool directory to the _openqa-worker user and let the workers create their own instance directories.
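A short sketch of both options (the instance numbers are only examples):
# option 1: create per-instance pool directories explicitly
install -d -m 0755 -o _openqa-worker /var/lib/openqa/pool/1
install -d -m 0755 -o _openqa-worker /var/lib/openqa/pool/2
# option 2: hand the whole pool directory to the worker user instead
chown -R _openqa-worker /var/lib/openqa/pool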
Troubleshooting
Tests fail quickly
Check the log files in /var/lib/openqa/testresults
KVM doesn’t work
make sure you have a machine with kvm support
make sure kvm_intel or kvm_amd modules are loaded
make sure you do have virtualization enabled in BIOS
make sure the '_openqa-worker' user can access /dev/kvm
make sure you are not already running other hypervisors such as VirtualBox
when running inside a vm make sure nested virtualization is enabled (pass nested=1 to your kvm module)
openid login times out
www.opensuse.org’s openid provider may have trouble with IPv6. openQA shows a message like this:
no_identity_server: Could not determine ID provider from URL.
To avoid that, switch off IPv6 or add a special route that prevents the system from trying to use IPv6 with www.opensuse.org:
ip -6 r a to unreachable 2620:113:8044:66:130:57:66:6/128
Introduction
This document provides additional information for use of the web interface or the REST API as well as administration information. For administrators it is recommended to have read the Installation Guide first to understand the structure of components as well as the configuration of an installed instance.
Use of the web interface
In general the web UI should be intuitive or self-explanatory. Look out for the little blue help icons and click them for detailed help on specific sections.
Some pages use queries to select what should be shown. The query parameters are generated on clickable links, for example starting from the index page or the group overview page clicking on single builds. On the query pages there can be UI elements to control the parameters, for example to look for older builds or only show failed jobs or other settings. Additionally, the query parameters can be tweaked by hand if you want to provide a link to specific views.
/tests/overview - Customizable test overview page
The overview page is configurable by the filter box. Also, some additional query parameters can be provided which can be considered advanced or experimental. For example, specifying no build will resolve the latest build which matches the other parameters specified. Specifying no group will show all jobs from all matching job groups. Also specifying multiple groups works, see the following example.
Figure 2. The openQA test overview page showing multiple groups at once. The URL query parameters specify the groupid parameter two times to resolve both the "opensuse" and "opensuse test" group.
Specifying multiple groups with no build will yield the latest build of the first group. This can be useful to have a static URL for bookmarking.
Description of test suites
Test suites can be described using API commands or the admin table for any operator using the web UI.
Figure 3. Entering a test suite description in the admin table using the web interface:
If a description is defined, the name of the test suite on the tests overview page shows up as a link. Clicking the link will show the description in a popup. The same syntax as for comments can be used, that is Markdown with custom extensions such as shortened links to ticket systems.
Figure 4. popover in test overview with content as configured in the test suites database:
Review badges
Based on comments in the individual job results for each build, a certificate icon is shown on the group overview page as well as the index page to indicate that every failure has been reviewed, e.g. a bug reference or a test issue reason is stated:
Meaning of the different colors
The green icon shows up when there is no work to be done.
No icon is shown if at least one failure still needs to be reviewed.
The black icon is shown if all review work has been done.
(To simplify, checking for false-negatives is not considered here.)
Show bug or label icon on overview if labeled gh#550
Show bug icon with URL if mentioned in test comments
Show bug or label icon on overview if labeled
For bug references write <bugtracker_shortname>#<bug_nr> in a comment, e.g. "bsc#1234"; for generic labels use label:<keyword> where <keyword> can be any valid character up to the next whitespace, e.g. "false_positive". The keywords are not defined within openQA itself. A valid list of keywords should be decided upon within each project or environment of one openQA instance.
Figure 5. Example for a generic label
Figure 6. Example for bug label
Related issue: #10212
'Hint:' You can also write (or copy-paste) full links to bugs and issues. The links are automatically changed to shortlinks (e.g. https://progress.opensuse.org/issues/11110 turns into poo#11110). Related issue: poo#11110
GitHub pull requests and issues can also be linked using the generic format `<marker>[#<project/repo>]#<id>`, e.g. gh#os-autoinst/openQA#1234, see gh#973
All issue references are stored within the internal database of openQA. The status can be updated using the /bugs API route, for example using external tools.
Figure 7. Example for visualization of closed issue references. Upside down icons in red visualize closed issues.
Build tagging
Tag builds with special comments on group overview
Based on comments on the group overview, individual builds can be tagged. As builds by themselves do not own any data, the job group is used to store this information. A tag references the build it applies to. It also has a type and an optional description. The type can later on be used to distinguish tag types.
The generic format for tags is
tag:<build_id>:<type>[:<description>], e.g. tag:1234:important:Beta1.
The more recent tag always wins.
A 'tag' icon is shown next to tagged builds together with the description on the group_overview page. The index page does not show tags by default to prevent a potential performance regression. Tags can be enabled on the index page using the corresponding option in the filter form at the bottom of the page.
Keeping important builds
As builds can now be tagged, we came up with the convention that the 'important' type - the only one for now - is used to tag every job that corresponds to a build as 'important' and to keep the logs for these jobs longer, so that we can always refer to the attached data, e.g. for milestone builds, final releases, jobs for which long-lasting bug reports exist, etc.
Filtering test results and builds
At the top of the test results overview page is a form which allows filtering tests by result, architecture and TODO-status.
There is also a similar form at the bottom of the index page which allows filtering builds by group and customizing the limits.
Highlighting job dependencies in 'All tests' table
When hovering over the branch icon after the test name, children of the job will be highlighted blue and parents red. So far this only works for jobs displayed on the same page of the table.
Use of the REST API
openQA includes a client script which - depending on the distribution - is packaged independently if you just want to interface with an existing openQA instance without needing to install the full package. Call <openqa-folder>/script/client --help for help (openSUSE: openqa-client --help).
Basics are described in the Getting Started guide.
Triggering tests
Tests can be triggered in multiple ways: using clone_job.pl, jobs post, isos post, as well as by retriggering existing jobs or whole media over the webUI.
Cloning existing jobs - clone_job.pl
If one wants to recreate an existing job from any publicly available openQA instance, the script clone_job.pl can be used to copy the necessary settings and assets to another instance and schedule the test. For the test to be executed it has to be ensured that matching resources can be found, for example a worker with a matching WORKER_CLASS must be registered. More details on clone_job.pl can be found in Writing Tests.
Spawning single new jobs - jobs post
Single jobs can be spawned using the jobs post API route. All necessary settings on a job must be supplied in the API request. The "openQA client" has examples for this.
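As a hedged sketch of such a request (every value below, including the TEST name and the ISO, is only a placeholder; which settings are actually required depends on your test distribution):
# spawn one job with all settings passed explicitly on the command line
openqa-client jobs post DISTRI=opensuse VERSION=Tumbleweed FLAVOR=DVD ARCH=x86_64 \
    TEST=textmode ISO=openSUSE-Tumbleweed-DVD-x86_64-Current.iso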
Spawning multiple jobs based on templates - isos post
The most common way of spawning jobs on production instances is using the isos post API route. Based on previously defined settings for media, job groups, machines and test suites, jobs are triggered based on template matching. The Getting Started guide already mentioned examples. In addition to the necessary template matching parameters, more parameters can be specified which are forwarded to all triggered jobs. There are also special parameters which only have an influence on the way the triggering itself is done. These parameters all start with a leading underscore but are set as request parameters in the same way as the other parameters.
The following scheduling parameters exist:
_NO_OBSOLETE | Do not obsolete jobs in older builds with the same DISTRI and VERSION (as is the default behavior). With this option jobs which are currently pending, for example scheduled or running, are not cancelled when a new medium is triggered. |
_DEPRIORITIZEBUILD | Setting this switch to '1' will not immediately obsolete jobs of old builds but rather deprioritize them up to a configurable limit of priority. |
_DEPRIORITIZE_LIMIT | The configurable limit of priority up to which jobs should be deprioritized. Needs _DEPRIORITIZEBUILD. |
_ONLY_OBSOLETE_SAME_BUILD | Only obsolete (or deprioritize) jobs for the same BUILD. This is useful for cases where a new build appearing doesn’t necessarily mean existing jobs for earlier builds with the same DISTRI and VERSION are no longer interesting, but you still want to be able to re-submit jobs for a build and have existing jobs for the exact same build obsoleted. |
Example for _DEPRIORITIZEBUILD and _DEPRIORITIZE_LIMIT:
openqa-client isos post ISO=my_iso.iso DISTRI=my_distri FLAVOR=sweet \
    ARCH=my_arch VERSION=42 BUILD=1234 \
    _DEPRIORITIZEBUILD=1 _DEPRIORITIZE_LIMIT=120
Where to now?
For test developers it is recommended to continue with the Test Developer Guide.
Introduction
openQA is an automated test tool that makes it possible to test the whole installation process of an operating system. It’s free software released under the GPLv2 license. The source code and documentation are hosted in the os-autoinst organization on GitHub.
This document provides the information needed to start developing new tests for openQA or to improve the existing ones. It’s assumed that the reader is already familiar with openQA and has already read the Starter Guide, available at the official repository.
Basic
This section explains the basic layout of openQA tests and the API available in tests. openQA tests are written in the Perl programming language. Some basic but no in-depth knowledge of Perl is needed. This document assumes that the reader is already familiar with Perl.
API
os-autoinst provides the API for the tests using the os-autoinst backend; you can take a look at the published documentation at http://open.qa/api/testapi/.
How to write tests
openQA tests need to implement at least the run subroutine to contain the actual test code and the test needs to be loaded in the distribution’s main.pm.
The test_flags subroutine specifies what happens when the test fails.
There are several callbacks defined:
post_fail_hook is called to upload log files or determine the state of the machine
pre_run_hook is called before the run function - mainly useful for a whole group of tests
post_run_hook is run after a successful run function - mainly useful for a whole group of tests
The following example is a basic test that assumes some live image that boots into the desktop when pressing enter at the boot loader:
use base "basetest";use strict;use testapi;sub run { # wait for bootloader to appear # with a timeout explicitly lower than the default because # the bootloader screen will timeout itself assert_screen "bootloader", 15; # press enter to boot right away send_key "ret"; # wait for the desktop to appear assert_screen "desktop", 300;}sub test_flags { # 'fatal' - abort whole test suite if this fails (and set overall state 'failed') # 'ignore_failure' - if this module fails, it will not affect the overall result at all # 'milestone' - after this test succeeds, update 'lastgood' # 'norollback' - don't roll back to 'lastgood' snapshot if this fails return { fatal => 1 };}1;
Test Case Examples
Example: Console test that installs software from remote repository via zypper command
sub run() {
    # change to root
    become_root;

    # output zypper repos to the serial
    script_run "zypper lr -d > /dev/$serialdev";

    # install xdelta and check that the installation was successful
    assert_script_run 'zypper --gpg-auto-import-keys -n in xdelta';

    # additionally write a custom string to serial port for later checking
    script_run "echo 'xdelta_installed' > /dev/$serialdev";

    # detecting whether 'xdelta_installed' appears in the serial within 200 seconds
    die "we could not see expected output" unless wait_serial "xdelta_installed", 200;

    # capture a screenshot and compare with needle 'test-zypper_in'
    assert_screen 'test-zypper_in';
}
Example: Typical X11 test testing kate
sub run() {
    # make sure kate was installed
    # if not ensure_installed will try to install it
    ensure_installed 'kate';

    # start kate
    x11_start_program 'kate';

    # check that kate execution succeeded
    assert_screen 'kate-welcome_window';

    # close kate's welcome window and wait for the window to disappear before
    # continuing
    wait_screen_change { send_key 'alt-c' };

    # typing a string in the editor window of kate
    type_string "If you can see this text kate is working.\n";

    # check the result
    assert_screen 'kate-text_shown';

    # quit kate
    send_key 'ctrl-q';

    # make sure kate was closed
    assert_screen 'desktop';
}
Variables
Test case behavior can be controlled via variables. Some basic variables like DISTRI, VERSION, ARCH are always set. Others like DESKTOP are defined by the 'Test suites' in the openQA web UI. Check the existing tests at os-autoinst-distri-opensuse on GitHub for examples.
Variables are accessible via the get_var and check_var functions.
Test Development tricks
Modifying setting of an existing test
There is no interface to modify existing tests but the clone_job.pl script can be used to create a new job that adds, removes or changes settings. This script is located at /usr/share/openqa/script/.
/usr/share/openqa/script/clone_job.pl --from localhost --host localhost 42 FOO=bar BAZ=
If you do not want a cloned job to start up in the same job group as the job you cloned from, e.g. to not pollute build results, the job group can be overwritten, too, using the special variable _GROUP. Add the quoted group name, e.g.:
clone_job.pl --from localhost 42 _GROUP="openSUSE Tumbleweed"
The special group value 0 means that the group connection will be separated and the job will not appear as a job in any job group, e.g.:
clone_job.pl --from localhost 42 _GROUP=0
Using snapshots to speed up development of tests
For lower turn-around times during test development based on virtual machines, the QEMU backend provides a feature that allows a job to start from a snapshot, which can help in this situation.
Depending on the use case, there are two options to help:
Create and preserve snapshots for every test module run (MAKETESTSNAPSHOTS)
Offers more flexibility as the test can be resumed almost at any point.However disk space requirements are high (expect more than 30GB for onejob)
This mode is useful for fixing non-fatal issues in tests and for debugging the SUT, as more than just the snapshot of the last failed module is saved.
Create a snapshot after every successful test module while always overwriting the existing snapshot to preserve only the latest (TESTDEBUG)
Allows skipping to just before the start of the first failed test module, which can be limiting, but preserves disk space in comparison to MAKETESTSNAPSHOTS.
This mode is useful for iterative test development.
In both modes there is no need to modify the tests (e.g. by adding the milestone test flag), as the behaviour is implied. In the latter mode every test module is also considered fatal, which means the job is aborted after the first failed test module.
Enable snapshots for each module
Run the worker with the --no-cleanup parameter. This will preserve the hard disks after test runs.
Set MAKETESTSNAPSHOTS=1 on a job. This will make openQA save a snapshot for every test module run. One way to do that is by cloning an existing job and adding the setting:
clone_job.pl --from https://openqa.opensuse.org --host localhost 24 MAKETESTSNAPSHOTS=1
Create a job again, this time setting the SKIPTO variable to the snapshot you need. Again, clone_job.pl comes in handy here:
clone_job.pl --from https://openqa.opensuse.org --host localhost 24 SKIPTO=consoletest-yast2_i
Use qemu-img snapshot -l something.img to find out what snapshots are in the image. Snapshots are named "test module category"-"test module name" (e.g. installation-start_install).
Storing only the last successful snapshot
Run the worker with the --no-cleanup parameter. This will preserve the hard disks after test runs.
Set TESTDEBUG=1 on a job. This will make openQA save a snapshot after each successful test module run. Snapshots are overwritten. The snapshot is named lastgood in all cases.
clone_job.pl --from https://openqa.opensuse.org --host localhost 24 TESTDEBUG=1
Create a job again, this time setting the SKIPTO variable to the module which failed on the previous run. Make sure the new job also has TESTDEBUG=1 set. This can be ensured by using the clone_job script on the clone source job or by specifying the variable explicitly:
clone_job.pl --from https://openqa.opensuse.org --host localhost 24 TESTDEBUG=1 SKIPTO=consoletest-yast2_i
Assigning jobs to workers
By default, any worker can get any job with the matching architecture.
This behavior can be changed by setting the job variable WORKER_CLASS. Jobs with this variable set (typically via the machines or test suites configuration) are assigned only to workers that have the same variable in their configuration file.
For example, the following configuration ensures that jobs with WORKER_CLASS=desktop can be assigned only to worker instances 1 and 2:
File: workers.ini
[1]
WORKER_CLASS = desktop

[2]
WORKER_CLASS = desktop

[3]
# WORKER_CLASS is not set
Writing multi-machine tests
Scenarios requiring more than one system under test (SUT), like High Availability testing, are covered as multi-machine tests (MM tests) in this section.
openQA approaches multi-machine testing by assigning dependencies between individual jobs. This means the following:
everything needed for MM tests must be running as a test job (or you are on your own); even support infrastructure (custom DHCP, NFS, etc.), which in principle is not part of the actual testing, must have a defined test suite so a test job can be created
the openQA scheduler makes sure tests are started as a group and in the right order, cancelled as a group if some dependencies are violated and cloned as a group if requested
openQA does not synchronize individual steps of the tests
openQA provides a locking server for basic synchronization of tests (e.g. wait until services are ready for failover), but the correct usage of locks is the test designer's job (beware of deadlocks)
In short, writing multi-machine tests adds a few more layers of complexity:
documenting the dependencies and order between individual tests
synchronization between individual tests
actual technical realization (i.e. custom networking)
Job dependencies
There are 2 types of dependencies: CHAINED and PARALLEL:
CHAINED describes the case when one test case depends on another and both are run sequentially, i.e. the KDE test suite runs only after the Installation test suite has finished successfully, and is cancelled if Installation fails.
To define a CHAINED dependency, add the variable START_AFTER_TEST with the name(s) of the test suite(s) after which the selected test suite is supposed to run. Use a comma-separated list for multiple test suite dependencies, e.g. START_AFTER_TEST="kde,dhcp-server".
PARALLEL describes an MM test: the test suites are scheduled to run at the same time and are managed as a group. On top of that, PARALLEL also describes test suite dependencies, where some test suites (children) run in parallel with other test suites (parents) only while the parents are running.
To define a PARALLEL dependency, use the PARALLEL_WITH variable with the name(s) of the test suite(s) which act as parent suite(s) to the selected test suite. In other words, PARALLEL_WITH describes "I need this test suite to be running during my run". Use a comma-separated list for multiple test suite dependencies, e.g. PARALLEL_WITH="web-server,dhcp-server". Keep in mind that the parent job must keep running until all children finish, otherwise the scheduler will cancel the child jobs once the parent is done.
Job dependencies are only resolved when using the iso controller to create new jobs from job templates. Posting individual jobs manually won't work.
Job dependencies are currently only possible between tests that are scheduled for the same machine.
OpenQA worker requirements
A CHAINED dependency requires only one worker, since dependent jobs run only after the first one finishes. A PARALLEL dependency, on the other hand, requires at least 2 workers even for simple scenarios.
Examples:
Listing 1. CHAINED - i.e. test basic functionality before going advanced - requires 1 worker
A <- B <- CDefine test suite A,then define B with variable START_AFTER_TEST=A and then define C with START_AFTER_TEST=B-or-Define test suite A, Band then define C with START_AFTER_TEST=A,BIn this case however the start order of A and B is not specified.But C will start only after A, B are successfully done.
Listing 2. PARALLEL basic High-Availability
A
^
B

Define test suite A
and then define B with variable PARALLEL_WITH=A.
A in this case is the parent test suite to B and must be running throughout B's run.
Listing 3. PARALLEL with multiple parents - i.e. complex support requirements for one test - requires 4 workers
A B C
\ | /
  ^
  D

Define test suites A, B, C
and then define D with PARALLEL_WITH=A,B,C.
A, B, C run in parallel and are parent test suites for D, and all must run until D finishes.
Listing 4. PARALLEL with one parent - i.e. running independent tests against one server - requires at least 2 workers
  A
  ^
 /|\
B C D

Define test suite A
and then define B, C, D with PARALLEL_WITH=A.
A is the parent test suite for B, C, D (which all can run in parallel).
Children B, C, D can run and finish anytime, but A must run until all of B, C, D finish.
Test synchronization and locking API
openQA provides a locking server through the lock API. To use the lock API, import the lockapi package (use lockapi;) in your test file. The lock API provides three functions: mutex_create, mutex_lock and mutex_unlock. Each of these functions takes one parameter: the name of the lock. Locks are associated with the caller's job - a lock can't be unlocked by a different job than the one that locked it.
mutex_lock tries to lock the mutex for the caller's job. If the lock is unavailable or held by another job, the mutex_lock call blocks.
mutex_unlock tries to unlock the mutex. If the lock is held by a different job, the mutex_unlock call blocks. When the lock becomes available, or if it does not exist, the call returns without doing anything.
mutex_create creates a new mutex. A lock created by mutex_create is automatically unlocked. If the mutex already exists, the call returns without doing anything.
Locks are addressed by their name. This name is valid within the group of tests defined by their dependencies. If several groups are running at the same time and use the same lock name, these locks are independent of each other.
The mmapi package provides wait_for_children, which the parent can use to wait for the children to complete.
Example of mmapi: Parent job - wait until the login prompt appears, assume services are started
use base "basetest";use strict;use testapi;use lockapi;use mmapi;sub run { assert_screen 'bootloader'; assert_screen 'login', 300; # services start automatically # unlock by creating the lock mutex_create('services_ready'); # wait until all children finish wait_for_children;}
Example of mmapi: Child job - wait until the parent is ready, then start testing the services
use base "basetest";use strict;use testapi;use lockapi;sub run { assert_screen 'bootloader'; assert_screen 'login', 300; # this blocks until lock is created then locks and immediately unlocks mutex_lock('services_ready'); mutex_unlock('services_ready'); # login to continue type_string("root\n"); sleep 1; type_string("secret\n");}
Sometimes it is useful to let the parent wait for a certain action on a child, for example to verify the server state after a completed request. In this scenario the child creates a mutex and the parent locks it.
The child can, however, die at any time. To prevent a parent deadlock in this situation, the parent has to pass the child's job ID as a second parameter to mutex_lock(). If the child job with the given ID has already finished, mutex_lock() dies.
Example of mmapi: Parent job - wait until the child reaches a given point
use base "basetest";use strict;use testapi;use lockapi;use mmapi;sub run { my $children = get_children(); # let's suppose there is only one child my $child_id = (keys %$children)[0]; # this blocks until lock is available and then does nothing mutex_unlock('child_reached_given_point', $child_id); # continue with the test}
Getting information about parents and children
Example of mmapi: Getting info about parents / children
use base "basetest";use strict;use testapi;use mmapi;sub run { # returns a hash ref containing (id => state) for all children my $children = get_children(); for my $job_id (keys %$children) { print "$job_id is cancelled\n" if $children->{$job_id} eq 'cancelled'; } # returns an array with parent ids, all parents are in running state (see Job dependencies above) my $parents = get_parents(); # let's suppose there is only one parent my $parent_id = $parents->[0]; # any job id can be queried for details with get_job_info() # it returns a hash ref containing these keys: # name priority state result worker_id # retry_avbl t_started t_finished test # group_id group settings my $parent_info = get_job_info($parent_id); # it is possible to query variables set by openqa frontend, # this does not work for variables set by backend or by the job at runtime my $parent_name = $parent_info->{settings}->{NAME} my $parent_desktop = $parent_info->{settings}->{DESKTOP} # !!! this does not work, VNC is set by backend !!! # my $parent_vnc = $parent_info->{settings}->{VNC}}
Support Server based tests
The idea is to have a dedicated "helper server" to allow advanced network based testing.
The support server takes advantage of the basic parallel setup described in the previous section, with the support server being the parent test 'A' and the test needing it being the child test 'B'. This ensures that test 'B' always has the support server available.
Preparing the supportserver:
The support server image is created by calling a special test, based on the autoyast test:
/usr/share/openqa/script/client jobs post DISTRI=opensuse VERSION=13.2 \
    ISO=openSUSE-13.2-DVD-x86_64.iso ARCH=x86_64 FLAVOR=Server-DVD \
    TEST=supportserver_generator MACHINE=64bit DESKTOP=textmode INSTALLONLY=1 \
    AUTOYAST=supportserver/autoyast_supportserver.xml SUPPORT_SERVER_GENERATOR=1 \
    PUBLISH_HDD_1=supportserver.qcow2
This produces the qemu image 'supportserver.qcow2' that contains the support server. The 'autoyast_supportserver.xml' should define the correct user and password, as well as the packages and the common configuration.
The specific role the support server should take is then selected when the server is run in the actual test scenario.
Using the supportserver:
In the Test suites, the supportserver is defined by setting:
HDD_1=supportserver.qcow2
SUPPORT_SERVER=1
SUPPORT_SERVER_ROLES=pxe,qemuproxy
WORKER_CLASS=server,qemu_autoyast_tap_64
where SUPPORT_SERVER_ROLES defines the specific roles (see the code in 'tests/support_server/setup.pm' for the available roles and their definitions), and the HDD_1 variable must be the name of the support server image as defined via the PUBLISH_HDD_1 variable during support server generation. If the support server is based on older SUSE versions (openSUSE 11.x, SLE11SP4, ...) it may also be necessary to add HDDMODEL=virtio-blk. In case of the qemu backend, one can also use BOOTFROM=c for a faster boot directly from the HDD_1 image.
Then for the 'child' test using this support server, the following additional variable must be set:

PARALLEL_WITH=supportserver-pxe-tftp

where 'supportserver-pxe-tftp' is the name given to the support server in the test suites screen. Once the tests are defined, they can be added to openQA in the usual way:
/usr/share/openqa/script/client isos post DISTRI=opensuse VERSION=13.2 \
    ISO=openSUSE-13.2-DVD-x86_64.iso ARCH=x86_64 FLAVOR=Server-DVD
where DISTRI, VERSION, FLAVOR and ARCH correspond to the job group containing the tests. Note that the networking is provided by tap devices, so both jobs should run on machines defined by (among other settings) NICTYPE=tap and WORKER_CLASS=qemu_autoyast_tap_64.
Example of Support Server: a simple tftp test
Let's assume that we want to test tftp client operation. For this, we set up the support server as a tftp server:
HDD_1=supportserver.qcow2
SUPPORT_SERVER=1
SUPPORT_SERVER_ROLES=dhcp,tftp
WORKER_CLASS=server,qemu_autoyast_tap_64
with the test suite name supportserver-opensuse-tftp.
The actual 'child' test job will then have to set PARALLEL_WITH=supportserver-opensuse-tftp, as well as other variables according to the test requirements. For convenience, we have also started a dhcp server on the support server, but even without it the network could be set up manually by assigning a free IP address (e.g. 10.0.2.15) on the system of the test job.
Example of Support Server: The code in the *.pm module doing the actual tftp test could then look something like the example below
use strict;
use base 'basetest';
use testapi;

sub run {
    # IP of the tftp support server (placeholder, adjust to the actual test setup)
    my $server_ip = '10.0.2.1';

    my $script = "set -e -x\n";
    $script .= "echo test >test.txt\n";
    $script .= "time tftp " . $server_ip . " -c put test.txt test2.txt\n";
    $script .= "time tftp " . $server_ip . " -c get test2.txt\n";
    $script .= "diff -u test.txt test2.txt\n";

    script_output($script);
}
assuming, of course, that the tested machine was already set up with the necessary infrastructure for tftp, e.g. the network was set up, the tftp rpm installed and the tftp service started. All of this can be conveniently achieved using the autoyast installation, as shown in the next section.
Example of Support Server: autoyast based tftp test
Here we will use autoyast to set up the system of the test job, together with the os-autoinst autoyast testing infrastructure. For the support server, this means using a proxy to access qemu-provided data, for downloading the autoyast profile and the tftp verify script:
HDD_1=supportserver.qcow2
SUPPORT_SERVER=1
SUPPORT_SERVER_ROLES=pxe,qemuproxy
WORKER_CLASS=server,qemu_autoyast_tap_64
The actual 'child' test job will then be defined as:
AUTOYAST=autoyast_opensuse/opensuse_autoyast_tftp.xml
AUTOYAST_VERIFY=autoyast_opensuse/opensuse_autoyast_tftp.sh
DESKTOP=textmode
INSTALLONLY=1
PARALLEL_WITH=supportserver-opensuse-tftp
again assuming the support server's name is supportserver-opensuse-tftp. Note that the pxe role already contains the tftp and dhcp server roles, since they are needed for the pxe boot to work.
Example of Support Server: The tftp test defined in the autoyast_opensuse/opensuse_autoyast_tftp.sh file could be something like:
set -e -xecho test >test.txttime tftp #SERVER_URL# -c put test.txt test2.txttime tftp #SERVER_URL# -c get test2.txtdiff -u test.txt test2.txt && echo "AUTOYAST OK"
and the rest is done automatically, using already prepared test modules in tests/autoyast subdirectory.
Using text consoles and the serial terminal
Typically the OS you are testing will boot into a graphical shell, e.g. the GNOME desktop environment. This is fine if you wish to test a program with a GUI, but if you want to run some shell scripts it is not so convenient.
To access a text based console or TTY, you can do something like the following:
use 5.018;
use warnings;
use base 'opensusebasetest';
use testapi;
use utils;

sub run {
    wait_boot;    # Utility function defined by the SUSE distribution

    select_console 'root-console';
}

1;
This will select a text TTY and log in as the root user (you could use become_root instead in this case). Had select_console 'root-console' been used before, then it would just select the TTY. Now that we are on a text console it is possible to run scripts and observe their output. Note that root-console is defined by the distribution, but also that calls to select_console can have far reaching consequences depending on what console is being selected and what backend/architecture the SUT is using.
Running a script: Using the assert_script_run and script_output commands
assert_script_run('cd /proc');
my $cpuinfo = script_output('cat cpuinfo');
if ($cpuinfo =~ m/avx2/) {
    # Do something which needs avx2
}
else {
    # Do some workaround
}
Note that it is usually not necessary to return text from the SUT to the test module for processing and it is often faster to do the processing in a shell script on the SUT. However you may find it more convenient, readable or reliable to do it in the Perl test module.
The script_run and script_output commands are high level commands which use type_string and wait_serial underneath. Sometimes you may wish to use lower level commands which give you more control, but be warned that it may also make your code less portable.
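As an illustration, a rough sketch of such lower level usage; the command and the marker string are arbitrary examples:

# type the command and append a marker with the exit status to the serial port
type_string "ls -l /etc; echo ls-done-$? > /dev/$serialdev\n";

# wait up to 30 seconds for the marker to show up on the serial port
die 'command did not finish in time' unless wait_serial 'ls-done-0', 30;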
Using a serial terminal
Important | You need a QEMU version >= 2.6.1 and to set the VIRTIO_CONSOLE variable to 1 to use this with the QEMU backend. |
Usually openQA controls the system under test using VNC. This allows the use of both graphical and text based consoles. Key presses are sent individually as VNC commands and output is returned in the form of screen images and text output from the SUT's serial port.
Sending key presses over VNC is very slow so for tests which send a lot of text commands it is much faster to use a serial port for both sending and receiving TTY commands.
select_console('root-virtio-terminal'); # Selects a virtio based serial terminal
Changing input and output to a serial terminal has the side effect of changing where wait_serial reads output from. This will cause some distribution-specific utility functions to fail; however, they can usually be fixed with the is_serial_terminal API function. To find out more, look at the is_serial_terminal POD in testapi.pm.
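A sketch of how a utility function might branch on the console type; the needle name and timeouts are only examples:

if (is_serial_terminal()) {
    # plain text only, there is no screen to match needles against
    wait_serial('login:', 60);
}
else {
    # graphical/VNC console, needles are available
    assert_screen('text-login', 60);
}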
Another consequence of moving to a serial terminal is that none of the needle based commands will be available because there is no screen image to match against.
Needle editing
If a new needle is created based on a failed test, the new needle will not be listed in old tests.
If an existing needle is updated with a new image or different areas, the old test will display the new needle, which might be confusing.
If a needle is deleted, old tests may display an error when viewing them in the web UI.
403 messages when using scripts
If you come across messages displaying ERROR: 403 - Forbidden, make sure that the correct API key is present in the client.conf file (see the example entry after this list).
If you are using a hostname other than localhost, pass --host foo to the script.
If you are using the fake authentication method and the message also says "api key expired", you can simply log out and log in again in the web UI and the expiration will be updated automatically.
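For reference, an entry in client.conf usually looks like the following; host name, key and secret are placeholders:

[openqa.example.com]
key = 1234567890ABCDEF
secret = 1234567890ABCDEF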
Mixed production and development environment
There are a few things to take into account when running a development version and a packaged version of openQA:
If the setup for the development scenario involves sharing /var/lib/openqa, it would be wise to have a shared group openqa with write and execute permissions over that directory, so that the geekotest user and the normal development user can share the environment without problems.
This approach will lead to a problem when the openQA package is updated, since the directory permissions will be changed again; nothing that a chmod -R g+rwx /var/lib/openqa/ and chgrp -R openqa /var/lib/openqa cannot fix, though.
Performance impact
openQA workers can cause a high I/O load, especially when creating VM snapshots. The impact therefore gets more severe when MAKETESTSNAPSHOTS is enabled. This should not impact the stability of openQA jobs but can increase job execution time. If you run jobs on a machine where the responsiveness of other services matters, for example your desktop machine, consider patching the IOSchedulingPriority of the worker's service file as described in the systemd documentation, for example set IOSchedulingPriority=7 for the lowest priority. If that is not available, you can try to execute the worker processes with ionice to reduce the risk of your system becoming significantly impacted by snapshot creation. Loading VM snapshots can also have an impact on SUT behavior, as the execution of the first step after loading a snapshot might be delayed. This can lead to problems if the executed tests do not foresee an appropriate timeout margin.
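For example, assuming the workers run as instances of the openqa-worker@.service unit, a systemd drop-in like the following sketch could be used; the path and unit name may differ on your setup:

# /etc/systemd/system/openqa-worker@1.service.d/override.conf
[Service]
IOSchedulingPriority=7

After adding the drop-in, run systemctl daemon-reload and restart the worker for the change to take effect.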
DB migration from SQLite to PostgreSQL
As a first step to start using PostgreSQL, configure the PostgreSQL database according to the PostgreSQL setup guide.
To migrate the API keys, run the following commands:
Export the data from the SQLite db:
sqlite3 db.sqlite -csv -separator ',' 'select * from api_keys;' > apikeys.csv
Note: The SQLite database file is located in /var/lib/openqa/db by default.
Import the data into PostgreSQL:
# openqa is the postgreSQL database name and apikeys.csv is the api keys export file
psql -U postgres -d openqa -c "copy api_keys from 'apikeys.csv' with (format csv);"
In case you need to migrate job groups and test suites, use the dump_templates and load_templates scripts accordingly.
Networking in openQA
Important | This overview is valid only when using the QEMU backend! |
Which networking type to use is controlled by the NICTYPE variable. If unset or empty, NICTYPE defaults to user, i.e. qemu user networking, which requires no further configuration.
For more advanced setups, or tests that require multiple jobs to be in the same network, the TAP or VDE based modes can be used.
Qemu user networking
With Qemu user networking each job gets its own isolated network with TCP and UDP routed to the outside. DHCP is provided by qemu. The MAC address of the machine can be controlled with the NICMAC variable. If not set, it's 52:54:00:12:34:56.
TAP based network
os-autoinst can connect qemu to TAP devices of the host system to leverage advanced network setups provided by the host by setting NICTYPE=tap.
The TAP device to use can be configured with the TAPDEV variable. If not set, it's automatically set to "tap" + ($worker_instance - 1), i.e. worker 1 uses tap0, worker 2 uses tap1 and so on.
For multiple networks per job (see the NETWORKS variable), the following numbering scheme is used:
worker1: tap0 tap64 tap128 ...
worker2: tap1 tap65 tap129 ...
worker3: tap2 tap66 tap130 ...
...
The MAC address of the virtual NIC is controlled by the NICMAC variable, or automatically computed from $worker_id if not set.
In TAP mode the system administrator is expected to configure the network, required internet access etc. on the host manually.
TAP devices need to be owned by the _openqa-worker user for openQA to be able to access them:
tunctl -u _openqa-worker -p -t tap0
If you want to use a TAP device which doesn't exist on the system, you need to set the CAP_NET_ADMIN capability on the qemu binary file:
zypper in libcap-progs
setcap CAP_NET_ADMIN=ep /usr/bin/qemu-system-x86_64
The network setup can be changed after qemu is started using the network configuration script specified in the TAPSCRIPT variable.
Sample script to add TAP device to existing bridge br0:
sudo brctl addif br0 $1
sudo ip link set $1 up
TAP with Open vSwitch
The recommended way to configure the network for TAP devices is using Open vSwitch. There is a support service os-autoinst-openvswitch.service which sets the vlan number of Open vSwitch ports based on the NICVLAN variable - this separates the groups of tests from each other.
The NICVLAN variable is dynamically assigned by the openQA scheduler.
Compared to the VDE setup discussed later, Open vSwitch is more complicated to configure, but provides a more robust and scalable network.
Start Open vSwitch and add TAP devices:
# start openvswitch.service
systemctl start openvswitch.service
systemctl enable openvswitch.service

# create bridge
ovs-vsctl add-br br0

# add tap devices, use vlan 999 by default, the vlan number is supposed to be changed when the vm starts
ovs-vsctl add-port br0 tap0 tag=999
ovs-vsctl add-port br0 tap1 tag=999
ovs-vsctl add-port br0 tap2 tag=999
ovs-vsctl add-port br0 tap3 tag=999
ovs-vsctl add-port br0 tap4 tag=999
The virtual machines need access to the os-autoinst webserver, accessible via IP 10.0.2.2. The IP addresses of the VMs are controlled by the tests and are likely to conflict if several independent tests run in parallel.
The VMs have unique MAC addresses that differ in the last 16 bits (see /usr/lib/os-autoinst/backend/qemu.pm).
os-autoinst-openvswitch.service sets up filtering rules for the following translation scheme, which provides non-conflicting addresses visible from the host:
MAC 52:54:00:12:XX:YY -> IP 10.1.XX.YY
That means that the local port of the bridge must be configured to IP 10.0.2.2 and netmask /15 that covers the 10.0.0.0 and 10.1.0.0 ranges.
ip addr add 10.0.2.2/15 dev br0
ip route add 10.0.0.0/15 dev br0
ip link set br0 up
Debugging Open vSwitch configuration
Boot sequence with wicked < 0.6.23:
wicked - creates tap devices
openvswitch - creates the bridge br0, adds tap devices to it
wicked - handles br0 as a hotplugged device, assigns the IP 10.0.2.2 to it, updates SuSEFirewall
os-autoinst-openvswitch - installs openflow rules, handles vlan assignment
Boot sequence with wicked 0.6.23 and newer:
openvswitch
wicked - creates the bridge br0 and the tap devices, adds the tap devices to the bridge
SuSEFirewall
os-autoinst-openvswitch - installs openflow rules, handles vlan assignment
The configuration and operation can be checked by the following commands:
ovs-vsctl show           # shows the bridge br0, the tap devices are assigned to it
ovs-ofctl dump-flows br0 # shows the rules installed by os-autoinst-openvswitch in table=0
packets from tapX to br0 create additional rules in table=1
packets from br0 to tapX increase packet counts in table=1
empty output indicates a problem with os-autoinst-openvswitch service
zero packet count or missing rules in table=1 indicate problem with tap devices
iptables -L -v
As long as the SUT has access to the external network, there should be a nonzero packet count in the forward chain between br0 and the external interface.
VDE Based Network
Virtual Distributed Ethernet provides a software switch that runs in user space. It allows connecting several qemu instances without affecting the system's network configuration.
The openQA workers need a vde_switch instance running. The workers reconfigure the switch as needed by the job.
Basic, single machine tests
To start with a basic configuration like qemu user mode networking, create a machine with the following settings:
VDE_SOCKETDIR=/run/openqa
NICTYPE=vde
NICVLAN=0
Start switch and user mode networking:
systemctl start openqa-vde_switch
systemctl start openqa-slirpvde
With this setting all jobs on the same host would be in the same network and share the same SLIRP instance, though.
Multi machine tests
Create a machine like above but don't set NICVLAN. openQA will dynamically allocate a VLAN number for all jobs that have dependencies between each other. By default this VLAN is private and has no internet access. To enable user mode networking, set VDE_USE_SLIRP=1 on one of the machines. The worker running the job on such a machine will then start slirpvde and put it in the correct VLAN.
Worker configuration
Requirements
zypper in openvswitch os-autoinst-openvswitch openQA-worker tunctl
systemctl enable SuSEfirewall2            # Needed to create NAT to outside network
systemctl enable openvswitch              # Needed for network creation
systemctl enable os-autoinst-openvswitch  # Needed to separate networks for parallel clusters
Note | In some cases (e.g. on Leap) it can be necessary to start the Open vSwitch service before the network service by modifying the Open vSwitch service. For reference see this. |
The os-autoinst-openvswitch.service uses br0 by default. Usually br0 is already used by KVM, so we need to configure br1 instead.
# /etc/sysconfig/os-autoinst-openvswitch
OS_AUTOINST_USE_BRIDGE=br1
For every MM worker you need a tap device (tap0, tap1, tap2, ...):
# /etc/sysconfig/network/ifcfg-tap0
BOOTPROTO='none'
IPADDR=''
NETMASK=''
PREFIXLEN=''
STARTMODE='auto'
TUNNEL='tap'
TUNNEL_SET_GROUP='nogroup'
TUNNEL_SET_OWNER='_openqa-worker'
Add all tap devices to bridge config
# /etc/sysconfig/network/ifcfg-br1
BOOTPROTO='static'
IPADDR='10.0.2.2/15'
STARTMODE='auto'
OVS_BRIDGE='yes'
OVS_BRIDGE_PORT_DEVICE_1='tap0'
OVS_BRIDGE_PORT_DEVICE_2='tap1'
OVS_BRIDGE_PORT_DEVICE_3='tap2'
The IP 10.0.2.2 can also serve as a gateway to access the outside network. For this, a NAT between br1 and eth0 must be configured with SuSEfirewall2 or iptables:
# /etc/sysconfig/SuSEfirewall2
FW_ROUTE="yes"
FW_MASQUERADE="yes"
FW_DEV_INT="br1"
Tell workers to run also multi-machine jobs
# /etc/openqa/workers.ini
[global]
WORKER_CLASS = qemu_x86_64,tap
REBOOT
GRE tunnels
By default all multi-machine workers have to be on a single physical machine. You can join multiple physical machines and their ovs bridges together with a GRE tunnel.
If the workers with TAP capability are spread across multiple hosts, the networks must be connected. See the Open vSwitch documentation for details.
Create the gre_tunnel_preup script (change the remote_ip value accordingly on both hosts):
# /etc/wicked/scripts/gre_tunnel_preup.sh
#!/bin/sh
action="$1"
bridge="$2"
ovs-vsctl --may-exist add-port $bridge gre1 -- set interface gre1 type=gre options:remote_ip=<IP address of other host>
And reference it with a PRE_UP_SCRIPT="wicked:gre_tunnel_preup.sh" entry:
# /etc/sysconfig/network/ifcfg-br1
<..>
PRE_UP_SCRIPT="wicked:gre_tunnel_preup.sh"
Allow GRE in firewall
# /etc/sysconfig/SuSEfirewall2
FW_SERVICES_EXT_IP="GRE"
FW_SERVICES_EXT_TCP="1723"
Note | When using GRE tunnels keep in mind that VMs inside the ovs bridges have to use MTU=1458 for their physical interfaces (eth0, eth1). If you are using support_server/setup.pm the MTU will be set automatically to that value on the support server itself, and it does MTU advertisement for DHCP clients as well. |
Introduction
openQA is an automated test tool that makes it possible to test the whole installation process of an operating system. It's free software released under the GPLv2 license. The source code and documentation are hosted in the os-autoinst organization on GitHub.
This document provides the information needed to start contributing to the openQA development: improving the tool, fixing bugs and implementing new features. For information about writing or improving openQA tests, refer to the Tests Developer Guide. In both documents it's assumed that the reader is already familiar with openQA and has already read the Starter Guide. All those documents are available at the official repository.
Development guidelines
As mentioned, the central point of development is the os-autoinst organization on GitHub, where several repositories can be found:
openQA, containing the documentation, server, worker and other support scripts.
os-autoinst, with the standalone test tool.
os-autoinst-distri-opensuse, containing the tests used in http://openqa.opensuse.org
os-autoinst-needles-opensuse, with the needles associated with the tests in the former repository.
os-autoinst-distri-example, with an almost empty set of tests meant to be used to start writing tests (and creating the corresponding needles) from scratch for a new operating system.
As in most projects hosted on GitHub, pull requests are always welcome and are the right way to contribute improvements and fixes.
Rules for commits
Every commit is checked by Travis CI as soon as you create a pull request, but you should also run the tidy script locally, i.e. before every commit call:
./script/tidy
to ensure your Perl code changes are consistent with the style rules.
You may also run local tests on your machine or in your own development environment to verify everything works as expected. Call:
make test
for unit and integration tests.
To execute a single test, one can use prove. You must set TEST_PG so the database can be found. If you set a custom base directory, be sure to unset it when running tests. Example:
TEST_PG='DBI:Pg:dbname=openqa_test;host=/dev/shm/tpg' OPENQA_BASEDIR= prove -v t/14-grutasks.t
To speed up the test initialization, start PostgreSQL using t/test_postgresql instead of using the system service, e.g.
t/test_postgresql /dev/shm/tpg
For git commit messages, use the rules stated in How to Write a Git Commit Message as a reference.
Every pull request is peer reviewed to give feedback on possible implications and on how we can help each other to improve.
If this is too much hassle for you, feel free to provide incomplete pull requests for consideration or create an issue with a code change proposal.
Getting involved into development
Developers willing to get really involved in the development of openQA, or people interested in following the always-changing roadmap, should take a look at the openQAv3 project in openSUSE's project management tool. This Redmine instance is used to coordinate the main development effort, organizing the existing issues (bugs and desired features) into 'target versions'.
Currently developers meet in the IRC channel #opensuse-factory and in a daily Jangouts call of the core developer team.
In addition to the ones representing development sprints, two other versions are always open. Easy hacks lists issues that are not especially urgent and that are considered easy for newcomers to implement. Developers looking for a place to start contributing are encouraged to simply go to that list and assign any open issue to themselves. Future improvements groups features that are in the developers' and users' wish list but that have little chance to be addressed in the short term, either because the return on investment is not worth it or because they are out of the current scope of the development.
The openQA and os-autoinst repositories also include test suites aimed at preventing bugs and regressions in the software. codecov is configured in the repositories to encourage contributors to raise the test coverage with every commit and pull request. New features and bug fixes are expected to be backed with the corresponding tests.
Technologies
Everything in openQA, from os-autoinst to the web frontend and from the tests to the support scripts, is written in Perl. So having some basic knowledge about that language is really desirable in order to understand and develop openQA. Of course, in addition to bare Perl, several libraries and additional tools are required. The easiest way to install all needed dependencies is using the available os-autoinst and openQA packages, as described in the Installation Guide.
In the case of os-autoinst, only a few CPAN modules are required: basically Carp::Always, Data::Dump, JSON and YAML. On the other hand, several external tools are needed, including QEMU, Tesseract and OptiPNG. Last but not least, the OpenCV library is the core of the openQA image matching mechanism, so it must be available on the system.
The openQA package is built on top of Mojolicious, an excellent Perl framework for web development that will be extremely familiar to developers coming from other modern web frameworks like Sinatra and that has nice and comprehensive documentation available at its home page.
In addition to Mojolicious and its dependencies, several other CPAN modules are required by the openQA package. For a full list of hard dependencies, see the file cpanfile at the root of the openQA repository.
openQA relies on PostgreSQL to store the information. It used to support SQLite, but that is no longer possible.
As stated in the previous section, every feature implemented in both packages should be backed by proper tests. Test::More is used to implement those tests. As usual, tests are located under the /t/ directory. In the openQA package, one of the tests consists of a call to Perltidy to ensure that the contributed code follows the most common Perl style conventions.
Starting the webserver from local Git checkout
To start the webserver for development, run script/openqa daemon.
openQA will pull the required assets on the first run.
openQA uses SASS, so Ruby development files are required. Under openSUSE, installing the packages devel_C_C++ and ruby-devel should be sufficient. openQA will install the required files automatically under .gem. Add .gem/ruby/2.4.0/bin to the PATH variable to let it find the sass/scss binaries. I also had to create symlinks of those binaries without the .ruby2.4 suffix so openQA could find them.
It is also useful to start openQA with morbo, which allows applying changes without restarting the server:

morbo -m development -w assets -w lib -w templates -l http://localhost:9526 script/openqa daemon
Managing the database
During the development process there are cases in which the database schema needs to be changed. There are some steps that have to be followed so that new database instances and upgrades include those changes.
When is it required to update the database schema?
After modifying files in lib/OpenQA/Schema/Result. However, not all changes require updating the schema. Adding just another method or altering/adding functions like has_many doesn't require an update. However, adding new columns or modifying or removing existing ones requires following the steps described in the next section.
How to update the database schema
First, you need to increase the database version number in the $VERSION variable in the lib/OpenQA/Schema.pm file. Note that it's recommended to notify the other developers before doing so, to synchronize in case more developers want to increase the version number at the same time. Then you need to generate the deployment files for new installations; this is done by running ./script/initdb --prepare_init.
Afterwards you need to generate the deployment files for existing installations; this is done by running ./script/upgradedb --prepare_upgrade. After doing so, the directories dbicdh/$ENGINE/deploy/<new version> and dbicdh/$ENGINE/upgrade/<prev version>-<new version> for PostgreSQL should have been created with some SQL files inside, containing the statements to initialize the schema and to upgrade from one version to the next in the corresponding database engine.
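In summary, the preparation of the deployment files boils down to these two calls, run from the root of the openQA checkout:

./script/initdb --prepare_init
./script/upgradedb --prepare_upgrade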
Migration scripts to upgrade from previous versions can be added under dbicdh/_common/upgrade. Create a <prev_version>-<new_version> directory and put some files there with DBIx commands for the migration. For examples just have a look at the migrations which are already there.
The above steps are only for preparing the required SQL statements, but they do not actually alter the database. Before doing so, it is recommended to back up your database to be able to downgrade again if something goes wrong or you just need to continue working on another branch. To do so, the following command can be used to create a copy:
createdb -O ownername -T originaldb newdb
To actually create or update the database (after creating a backup as described), you should run either ./script/initdb --init_database or ./script/upgradedb --upgrade_database. This is also required when the changes are installed in a production server.
How to add fixtures to the database
Note: This section is not about the fixtures for the testsuite. Those are located under t/fixtures.
Note: This section might not be relevant anymore. At least, none of the mentioned directories with files containing SQL statements are currently present.
Fixtures (initial data stored in tables at installation time) are stored in files in the dbicdh/_common/deploy/_any/<version> and dbicdh/_common/upgrade/<prev_version>-<next_version> directories.
You can create as many files as you want in each directory. These files contain SQL statements that will be executed when initializing or upgrading a database. Note that those files (and directories) have to be created manually.
Executed SQL statements can be traced by setting the DBIC_TRACE environment variable.
export DBIC_TRACE=1
How to overwrite config files
It can be necessary during development to change the config files in etc/. For example, you have to edit etc/openqa/database.ini to use another database, or to increase the log level it's useful to set the loglevel to debug in etc/openqa/openqa.ini.
To avoid these changes getting into your git workflow, copy them to a new directory and set OPENQA_CONFIG in your shell setup files:
cp -ar etc/openqa etc/mine
export OPENQA_CONFIG=$PWD/etc/mine
Note that OPENQA_CONFIG points to the directory containing openqa.ini, database.ini, client.conf and workers.ini.
How to setup PostgreSQL to test locally with production data
Install PostgreSQL - under openSUSE the following packages are required: postgresql-server postgresql-init
Start the server: systemctl start postgresql
The following steps need to be done by the user postgres: su - postgres
Create user: createuser your_username where your_username must be the same as the UNIX user you start your local openQA instance with.
Create database: createdb -O your_username openqa
The next steps must be done by the user you start your local openQA instance with.
Import dump: pg_restore -c -d openqa path/to/dump
Configure openQA to use PostgreSQL as described in the section Database of the installation guide. User name and password are not required.
Adding new authentication module
openQA comes with three authentication modules providing authentication methods: OpenID, iChain and Fake (see User authentication).
All authentication modules reside in the lib/OpenQA/Auth directory. During openQA start, the [auth]/method section of /etc/openqa/openqa.ini is read and, according to its value (or the default OpenID), openQA tries to require OpenQA::WebAPI::Auth::$method. If successful, the module for the given method is imported; otherwise openQA ends with an error.
Each authentication module is expected to export the auth_login and auth_logout functions. In case of a request-response mechanism (as in OpenID), auth_response is imported on demand.
Currently there is no login page because all implemented methods use either a 3rd party page or none.
The authentication module is expected to return a HASH:
%res = (
    # error = 1 signals auth error
    error => 0|1,
    # where to redirect the user
    redirect => '',
);
The authentication module is expected to create or update the user entry in the openQA database after user validation. See the included modules for inspiration.
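A minimal sketch of such a module is shown below; the method name Example and the export mechanism are assumptions for illustration, and the modules shipped in the repository remain the authoritative reference:

package OpenQA::WebAPI::Auth::Example;
use strict;
use warnings;
use Exporter 'import';

# sketch: export the functions the framework is expected to call
our @EXPORT = qw(auth_login auth_logout);

sub auth_login {
    # validate the user here and create/update the user entry in the database
    # error = 1 would signal an authentication error
    return (error => 0, redirect => '');
}

sub auth_logout {
    return (error => 0);
}

1;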
Customize base directory
It is possible to customize the openQA base directory by setting the environment variable OPENQA_BASEDIR. The default value is /var/lib.
Running tests of openQA itself
To execute the test suite locally, use make test. It is also possible to run a particular test, for example prove t/api/01-workers.t.
To run UI tests the package perl-Selenium-Remote-Driver is required. Note that the version provided by Leap 42.2 is too old. The version from the repository devel-languages-perl can be used instead.
You need to install chromedriver and either chrome or chromium for the ui tests.
You can alter the appearance of the openQA web UI to some extent through the 'branding' mechanism. The 'branding' configuration setting in the 'global' section of /etc/openqa/openqa.ini specifies the branding to use. It defaults to 'openSUSE', and openQA also includes the 'plain' branding, which is - as its name suggests - plain and generic.
To create your own branding for openQA, you can create a subdirectory of /usr/share/openqa/templates/branding (or wherever openQA is installed). The subdirectory's name will be the name of your branding. You can copy the files from branding/openSUSE or branding/plain to use as starting points, and adjust as necessary.
Web UI template
openQA uses the Mojolicious framework's templating system; the branding files are included into the openQA templates at various points. To see where each branding file is actually included, you can search through the files in the templates tree for the text include_branding. Anywhere that helper is called, the branding file with the matching name is being included.
The branding files themselves are Mojolicious 'Embedded Perl' templates just like the main template files. You can read the Mojolicious documentation for help with the format.
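For instance, a branding file could be pulled into a main template with a call like the following sketch; the name docbox is hypothetical and only meant to illustrate how the helper is used:

%# inside a Mojolicious 'Embedded Perl' template in the templates tree
%= include_branding 'docbox'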