v1.7.0 - ADQ Release Notes
Status: GA
ADQ v1.7.0 sets the foundation for even more powerful features to come, focusing on speed, scalability, and seamless user experience.
IMPORTANT: Please review the Known Limitations section as some features of the platform have been deprecated due to updates made in ADQ v1.6.0 and ADQ v1.7.0.
1 New Features
1.1 Performance improvements via distributed architecture
We’ve introduced a powerful new distributed architecture option for deploying the ADQ platform that is designed to boost performance, improve scalability, and enhance the overall user experience.
The Data Quality Manager (DQM), the engine that powers rule processing in ADQ, now supports a controller/worker architecture. This means rule executions, workflow tests, and input data extractions can be handled concurrently across multiple worker nodes, reducing bottlenecks and system load.
Meanwhile, the Data Quality Clinic (DQC), which stores rule results for remediation, now supports parallel data set imports, allowing users to work with exception records more quickly and efficiently.
How this benefits users:
Faster rule development experience: Run multiple test workflows and data extractions at once without slowing down the platform.
Improved performance in production: Scheduled rule execution groups are now distributed across worker nodes, significantly reducing processing times.
Enhanced scalability: Easily support growing data volumes and more complex rule sets without compromising performance.
Smoother remediation workflows: Quickly load and view exception data, even from large data sets.
This update ensures that both development and production environments are faster, more stable, and better equipped to scale with the data quality needs of the business.
1.2 User management and data access control area in ADQ
A new Access Control area has been introduced, providing a centralised location for managing user access, roles, and data permissions across the platform.
This dedicated section allows configuration of users, user groups, roles, and data source access through intuitive wizards. It consolidates access control into one place, removing the need to manage roles separately in DQM or DQC.
Permissions have been enhanced to support more granular access control, making it possible to align user permissions more precisely with their responsibilities. For example, permissions can now be set to allow viewing of data source connections without edit rights, or to enable modifications to data source views only.
In addition, nested user groups are now supported. Each group can contain other groups or belong to one, reflecting complex organisational structures. A tree grid view presents a clear visualisation of user groups and their relationships, making it easier to understand and manage access at scale.
Key highlights include:
Brand new access control area within the ADQ UI
Centralised configuration of users, roles, and data access
Granular permissions aligned with platform actions
Nested user groups with a visual tree structure for clarity
This addition significantly enhances access governance while improving efficiency and scalability in managing user roles within ADQ.
1.3 Optimised data extraction for profiling
The data extraction step when running a profiling configuration has been refactored and optimised to improve performance.
1.4 Rules in multiple execution groups
DQ rules can belong to more than one rule execution group, offering more flexibility when scheduling groups of rules to run.
1.5 Support for encrypted properties in connection strings
Encrypted properties, such as private key file passwords, can be specified in JDBC URLs for database data source connections.
1.6 Last execution tables in the DQ results database contain all rules
The following database tables now contain the latest execution of each rule, even if the rules are spread across execution groups or a workflow test was run. Previously, these tables contained rules from either the latest rule execution group run or the latest rule workflow test only.
ssdq.aggregated_results_last_execution_horizontal
ssdq.aggregated_results_last_execution_vertical
1.7 Rules with DQ score of 100% displayed in DQ Metrics
Rules which produced no data issues/breaks are displayed in the DQ Metrics table visual in the Insights Hub with a DQ score of 100% for that rule. Previously, only rules which had failing records were shown.
1.8 UI/UX enhancements
Filters added to the Data Quality Issues by Rule page such as Data Issues Assignee.
Break field and primary key icons moved to the beginning of the column header name in Data Issues.
Data source view filter added to DQ Metrics.
In My Rules library, users can display the data source(s) and view(s) used by the rule by selecting these columns for display.
1.9 Accessibility improvements
An inactivity warning message is displayed, allowing the user to prevent automatic logout.
When cancelling out of the rule view/edit/create wizard, the user’s position in the library is retained, i.e. the user is not brought to the top of the library of rules.
1.10 ADQ log file housekeeping
A housekeeping solution runs each day to clear out ADQ server log files. A new configuration file, housekeeping.properties, lets the user set the time at which the housekeeping executes daily and the number of days to retain the log files. These properties are adq-server.housekeeping.execution-time and adq-server.housekeeping.log-file.retain-days respectively.
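As a sketch, housekeeping.properties might look like the following. Only the two property names come from this release; the time format and values shown are illustrative assumptions, not defaults:

```properties
# Illustrative housekeeping.properties (example values, not documented defaults)
# Run the log-file housekeeping at 02:00 each day
adq-server.housekeeping.execution-time=02:00
# Keep the last 14 days of log files
adq-server.housekeeping.log-file.retain-days=14
```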
1.11 Reusable rule library updates
The rule metadata, e.g. the dimension column, has been reviewed and refined for each reusable rule.
The Rule Categories column is now displayed by default in the Reusable Rules Library, replacing the Rule Type column, as Rule Categories contains more useful metadata about the rule.
2 Defects Fixed
2.1 Profiling configurations which were not enabled had empty column name added
If a profiling configuration was created with one of its options disabled (e.g. rule suggestion and outlier detection enabled but data profiling disabled), the column name list for the disabled option contained an empty column name when the user returned to edit the configuration. This then caused failures if the user enabled the previously disabled option and ran the profiling job.
2.2 Rule result column named ‘status’ or ‘assignee’ caused errors in Data Issues
In Data Issues, a duplicate key error message was produced on the Data Quality Issues by Record screen when a rule contained a column called status and/or assignee in the rule result file. These columns have been renamed to avoid a conflict with user-defined columns in the rule result named status or assignee.
2.3 Search attributes page displayed when cloning reusable rule
When cloning a reusable rule, the Search Attributes page was displayed as part of the rule clone wizard. This page should be hidden from the user as any configuration entered by the user into the Search field(s) will not be honoured by the rule. This wizard page will no longer be displayed when cloning a reusable rule.
2.4 Only invalid rule output files displayed on rule executions
If only one of the rule’s output files (rule result or rule count) was invalid, both output files were returned to the user. This was misleading as it suggested there was an issue with both files. Now, only the invalid output files are returned to the user in the Rule Executions area.
2.5 Rule execution groups in incorrect UI group prevent start-up of ADQ
If rule execution group DQM solutions in the SSDQ client resided in a UI group other than ssdq-ui/rule-execution-groups, the ADQ service failed to start. Now, ADQ will ignore any solutions prefixed with rule-execution-group- which are not in the ssdq-ui/rule-execution-groups UI group. This means the execution groups corresponding to these solutions will not appear in the Rule Execution Groups area. If this is undesirable then the solutions must be moved to the UI group ssdq-ui/rule-execution-groups.
2.6 SSDQ upgrade updated ordinal column in data-source-view-columns table
When upgrading from ADQ v1.4.2 to later versions, the SSDQ automated upgrade process was incorrectly updating the ORDINAL column in the data-source-view-columns custom table. Now, the ORDINAL column will correctly begin at 1 for each view.
2.7 Spaces and hyphens in source data column headers caused upload of data issues to fail
If spaces or hyphens were present in source data column headers, the upload of rule results to Data Issues failed, as the underlying database (DQC) does not support spaces or hyphens in column header names. ADQ now converts the spaces and hyphens in these column header names to underscores, allowing the upload of rule results to succeed.
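The cleansing described above can be sketched in Python. This is a minimal illustration of the space/hyphen-to-underscore idea only; the function name is hypothetical and the exact rules ADQ applies are not documented here:

```python
import re

def sanitize_column_header(header: str) -> str:
    """Replace spaces and hyphens with underscores, mirroring the
    cleansing ADQ applies to column headers before uploading rule
    results to DQC. Illustrative sketch, not the ADQ implementation."""
    # In the character class, the trailing hyphen is a literal '-'
    return re.sub(r"[ -]", "_", header)

print(sanitize_column_header("account-holder name"))  # account_holder_name
```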
3 Known Defects
3.1 Database data source view exports convert decimals past a certain length to scientific notation
When retrieving the rule input from a database data source view created using the SQL builder or the SQL query functionality, if the data column is a float type and the value falls outside a certain range (greater than 10,000,000 or less than 0.001), the JDBC connector exporting the data converts the decimal number into scientific notation.
Workaround
Use a SQL query rule to perform the check rather than a custom FlowDesigner project rule. SQL query rules perform the rule logic directly on the source database so do not need a data source view defined. The SQL query should include a CAST in the query to ensure the data is not output in scientific notation, e.g. CAST(account_balance AS DECIMAL(20, 2)). Note you will need to know the number of digits after the decimal point.
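The same idea can be illustrated in Python (illustrative only; the actual workaround is the SQL CAST above): forcing fixed-point formatting avoids the scientific-notation rendering that very small or very large floats otherwise receive.

```python
from decimal import Decimal

value = 5e-05  # below the 0.001 threshold mentioned above

# Default string rendering uses scientific notation for very small floats
print(str(value))  # 5e-05

# Forcing fixed-point output, analogous to CAST(... AS DECIMAL(20, 2)) in SQL
print(format(Decimal(str(value)), "f"))  # 0.00005
```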
3.2 Scheduled rule group execution fails when no rules are enabled
If there is a rule execution group which is scheduled to run but there are no enabled rules in the group, the execution will fail.
Workaround
None.
4 Known Limitations
4.1 Unable to cancel rule executions via ADQ on distributed architecture
It will not be possible to cancel running rule group executions via ADQ when the platform is deployed using a distributed architecture, i.e. with worker DQMs set up. It will, however, be possible to do this via the DQM as follows:
Login to the SSDQ client on the master DQM.
Select the Executions tab and locate the execution of the launcher solution for the rule execution you wish to cancel. For execution groups, the solution will be rule-execution-group-EXECUTION_GROUP_NAME; for workflow tests, it will be metrics-controller-rule-test; and for input data extractions, it will be data-for-rule.
Select the execution and locate the remote-task-performer item.
Select this item and navigate to its Arguments tab.
Check the “Arguments with Properties Expanded” box and locate the URL argument. This will contain the URL of the worker DQM on which the execution is running.
Login to the SSDQ client on the worker DQM.
In Executions, cancel the solution execution. For execution groups, the solution will be metrics-controller-executor; for workflow tests, it will be metrics-controller-rule-test-executor; and for input data extractions, it will be data-for-rule-executor.
4.2 Automated upgrade limitations for updating user roles and permissions
For the new User Management area in ADQ, the automated upgrade will not fully upgrade all user management and access control entities, as a 1:1 mapping of all permissions and roles from previous releases to the formats required by ADQ is not possible.
Please review the manual steps outlined in https://jiraatdatactics.atlassian.net/wiki/spaces/DS/pages/1523417143/v1.7.0+-+ADQ+Deployment+Guide#6.1.11-User-Management-Upgrade to ensure all users, roles, permissions and data access is configured correctly following the upgrade.
4.3 Optimised DQ Workflow Limitations from ADQ v1.6.0
Record level auto-assignment
The workflow will not automatically assign data issues/broken records to users at the row level. This means if the user populates the data_owner field in the FlowDesigner rule project or SQL rule with the username or user group name of the individual or team who is responsible for remediating these records, the workflow will not assign the data issues to these users. The records will be automatically assigned to the user or user group specified as the Data Issues Assignee for that DQ rule.
To have the data issues records assigned to different users or groups, the user will need to do this manually or via a bulk assignment script in Data Issues.
The auto-assignment of records to the Default Assignee only occurs for new records detected by the DQ rule.
For example, if Record_1 and Record_2 were detected by the DQ rule Rule_1 on Monday at 09:00 where the data_owner was set to User_1 and User_2 respectively for each record, the DQ workflow would automatically assign these records to Rule_1’s default Data Issues Assignee, User_3. This means in Data Issues Record_1 and Record_2 will both be assigned to User_3.
Then, in Data Issues, User_4 assigns Record_1 to User_1 and Record_2 to User_2. If Record_1 and Record_2 are detected as data issues by Rule_1 again on the next scheduled run of the execution group on Tuesday at 09:00, and Record_3 is detected as a new rule break, then Record_1 and Record_2 will remain assigned to User_1 and User_2 respectively, and Record_3 will be assigned to User_3.
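The assignment behaviour described in the example above can be sketched as follows. The function is a hypothetical helper, not the ADQ implementation: new records receive the rule's default Data Issues Assignee, while previously assigned records keep their assignee.

```python
def assign_issues(detected, existing_assignments, default_assignee):
    """Sketch of the auto-assignment rule described above (hypothetical
    helper, not an ADQ API): records without an existing assignment
    go to the rule's default Data Issues Assignee; records already
    assigned keep their current assignee."""
    return {
        record: existing_assignments.get(record, default_assignee)
        for record in detected
    }

# Monday 09:00: both records are new, so both go to the default assignee User_3
monday = assign_issues(["Record_1", "Record_2"], {}, "User_3")

# A steward then manually reassigns the records in Data Issues
manual = {"Record_1": "User_1", "Record_2": "User_2"}

# Tuesday 09:00: only the new Record_3 receives the default assignee
tuesday = assign_issues(["Record_1", "Record_2", "Record_3"], manual, "User_3")
```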
Specifying the winning record in the FlowDesigner rule project
When building DQ rule logic in FlowDesigner, users were previously able to specify which record in the rule result file was the winning record; the optimised DQ workflow no longer supports this. One purpose of this feature was for matching or deduplication rules, where the user may have wanted to suggest which of the candidate match records was the “best” record or match. The best match would then be the winning record in Data Issues to assist the Data Steward during remediation. Another was for data enrichment, where users could extend their FlowDesigner rule logic to cleanse and transform the records which failed the DQ check so they conform to the expected format or value. The corrected or enriched records would be the winning records in Data Issues, where the Data Steward can download the already remediated records and make the relevant updates in the source system.
join-and-filter and custom-solutions custom tables
The join-and-filter and custom-solutions custom tables have been superseded by composite data source views, meaning any data transformations defined there must be migrated to composite data source view FlowDesigner projects. Alternatively, if all input data for the join-and-filter and custom-solutions transformations is extracted from a single database, they can be migrated to SQL query views.
Data source change affecting allowlist records
There existed a property ssdq.data-source-change-affect-allowlist in the ssdq-settings property set which dictated whether records should be unallowlisted, i.e. moved back to the main dataset/list of records for remediation, if the source data changed for the offending record/DQ break.
In ADQ v1.6.0, allowlisted records are only unallowlisted if the allowlist expiry date has been met or if the user manually updates the Remediation Category of the record to any value except ALLOWLIST.
Record type column in Data Issues/DQC
The Record Type column in Data Issues/DQC has now been removed. This column held one of the following values: OLD, OLD-CHANGED or NEW. It informed the user if the record was an old broken record, i.e. identified as a DQ break by a previous execution of the rule, an old broken record where the source data has changed since the previous execution of the rule, or a new broken record, i.e. a record which was not identified as a DQ break by the previous execution of the rule.
Users can now obtain break age metrics from Advanced Insights, which inform the user of how many days it took to remediate the broken record (break age) and how many times that record failed the DQ check following remediation (reoccurrence time).
AWS and Azure Data Extraction
AWS S3 bucket and Azure Data Lake data extractions are no longer supported. Any data extractions defined in the custom tables aws-files-extract and azure-ADLS2-files-extract will not be executed, meaning any rules using these sources will not run.
DQ results database table data_scoring_rules_last_execution_horizontal not populated
The database table data_scoring_rules_last_execution_horizontal in the DQ results database is no longer populated.
4.4 Rule Dictionary has a maximum limit of 45 columns
A rule dictionary with more than 45 columns can be uploaded to DQM, but the DQ workflow will fail when processing it.
4.5 Rule Results files are limited to 30 columns
The rule is able to export more than 30 columns, but the workflow will fail if the results exceed 30 columns. This includes the row id, rule id and data owner columns. The rule input has no column limit.
4.6 FlowDesigner string length limitation
FlowDesigner has a maximum string length of 32,000 bytes per row.
4.7 Disallowed rule result file column names
The following column names (case insensitive) are not permitted in the rule result file produced by the rule, as they are reserved for components of the DQ workflow:
autonumber
state
table
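A hypothetical pre-flight check (not part of ADQ) could flag these reserved names before a rule result file is submitted:

```python
# Reserved DQ-workflow column names, compared case-insensitively
RESERVED_COLUMNS = {"autonumber", "state", "table"}

def find_reserved_columns(columns):
    """Return any column names that clash with the reserved set,
    ignoring case. Hypothetical helper, not an ADQ API."""
    return [c for c in columns if c.lower() in RESERVED_COLUMNS]

print(find_reserved_columns(["row_id", "State", "balance"]))  # ['State']
```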
4.8 Rule result cannot have more than six primary keys
The rule result file can have at most six columns in its compound key for it to be successfully processed.
4.9 Solidatus integration only updates rules using SQL builder views without quoted identifiers
The Solidatus integration solution can currently only update rules whose input is from a SQL builder view where the view is extracted without quoted identifiers, i.e. the tick box ‘Do use quoted identifiers during extraction’ on the View Creation wizard is unticked.
5 Notices
5.1 Deprecating DQC Collector
The DQC collector can be uninstalled following a successful upgrade to ADQ v1.6.0. Before uninstalling the collector, any previous versions of the SSDQ client (older than version 2025.07.03) must be archived and deleted from the DQM SYSTEM client. Alternatively, all old SSDQ clients must have the DQC collector trigger disabled in the dqc-results-scanner solution.
5.2 Migrating SQL Join & Filters to Composite Views
As mentioned above, any data extractions defined in the join-and-filter and custom-solutions custom tables must be migrated to composite data source views or SQL query views.
5.3 Collections created outside of DQ workflow
The per-rule collections that house the DQ breaks in the Data Issues database (DQC) are created outside of the DQ workflow.
For SQL rules, the collection is created when the SQL rule is created.
For reusable rules, the collection is created when the reusable rule is added to My Rules library.
For custom rules, the collection is created when the user uploads the FlowDesigner rule project via ADQ. However, if the FlowDesigner rule project is deployed directly from FlowDesigner to DQM (the deploy to server option), then the rule’s collection will need to be created manually. To do this, click Create Collection in the action options for the rule.
Collections can also be created in bulk by selecting the relevant rules in My Rules library, selecting the Bulk action button in the bottom right hand corner and choosing Create Collections.
5.4 PingFederate and Kerberos Authentication Behaviour
Scenario Overview
In environments making use of ADQ with PingFederate and Kerberos for authentication, users may encounter a specific behaviour:
Regardless of the username entered, a user can log in to ADQ if their machine user is a member of the appropriate groups. This means that authentication is based on the machine user rather than the individual’s entered username.
Technical Details
PingFederate Token Handling: This behaviour is partly due to how PingFederate processes and honours authentication tokens.
Auth Model Configuration: Our current authentication model requests a login for each session. However, PingFederate’s token handling mechanism bypasses this by validating the machine user’s group membership.
Implications for Users
Group Membership: Ensure that only authorised machine users are members of the relevant groups, as their membership will govern access to ADQ.
Username Entry: The entered username during login will not affect the authentication outcome if the machine user is appropriately grouped.
Recommendations
Review and manage group memberships to maintain secure access controls.
Inform users about this behaviour to set correct expectations during the login process.
Can't find what you're looking for? Get in touch with our support team at https://jiraatdatactics.atlassian.net/servicedesk/customer/portals for help!