
Getting started with AWS Glue Data Quality for ETL Pipelines


Today, hundreds of thousands of customers use data lakes for analytics and machine learning. However, data engineers have to cleanse and prepare this data before it can be used. The underlying data has to be accurate and up to date for customers to make confident business decisions; otherwise, data consumers lose trust in the data and make suboptimal or incorrect decisions. Evaluating whether the data is accurate and up to date is a common task for data engineers. There are various data quality tools available today, but they usually require manual processes to monitor data quality.

AWS Glue Data Quality is a preview feature of AWS Glue that measures and monitors the data quality of Amazon Simple Storage Service (Amazon S3) data lakes and of AWS Glue extract, transform, and load (ETL) jobs. This is an open preview feature, so it is already enabled in your account in the available Regions. You can easily define and measure data quality checks in the AWS Glue Studio console without writing code, which simplifies the experience of managing data quality.

This post is Part 2 of a four-post series explaining how AWS Glue Data Quality works. Check out the previous post in this series:

In this post, we show how to create an AWS Glue job that measures and monitors the data quality of a data pipeline. We also show how to take action based on the data quality results.

Solution overview

Let's consider an example use case in which a data engineer needs to build a data pipeline that ingests data from a raw zone to a curated zone in a data lake. As a data engineer, one of your key responsibilities, along with extracting, transforming, and loading data, is validating the quality of the data. Identifying data quality issues upfront helps you prevent bad data from landing in the curated zone and avoid painful data corruption incidents.

In this post, you will learn how to easily set up built-in and custom data validation checks in your AWS Glue job to prevent bad data from corrupting the downstream high-quality data.

The dataset used for this post is synthetically generated; the following screenshot shows an example of the data.
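The screenshot is not reproduced here, but the rules built later in this post reference the columns Customer_ID, First_Name, Telephone, and Email. The following small Python sketch writes an illustrative stand-in file with those columns, including one row that is missing both Telephone and Email (the condition the custom rule flags later); the column list and values are assumptions for illustration, not the actual dataset.

    import csv

    # Illustrative stand-in for the synthetic customer dataset; the columns are
    # assumed from the data quality rules used later in this post.
    rows = [
        {"Customer_ID": "C001", "First_Name": "Ana", "Telephone": "555-0101", "Email": "ana@example.com"},
        {"Customer_ID": "C002", "First_Name": "Bruno", "Telephone": "555-0102", "Email": "bruno@example.com"},
        # This row has neither Telephone nor Email, the condition the custom rule flags later.
        {"Customer_ID": "C003", "First_Name": "Chen", "Telephone": "", "Email": ""},
    ]

    with open("customer.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)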

Set up resources with AWS CloudFormation

This post includes an AWS CloudFormation template for a quick setup. You can review and customize it to suit your needs.

The CloudFormation template generates the following resources:

  • An Amazon Simple Storage Service (Amazon S3) bucket (gluedataqualitystudio-*).
  • The following prefixes and objects in the S3 bucket:
    • datalake/raw/customer/customer.csv
    • datalake/curated/customer/
    • scripts/
    • sparkHistoryLogs/
    • temporary/
  • AWS Identity and Access Management (IAM) users, roles, and policies. The IAM role (GlueDataQualityStudio-*) has permission to read and write from the S3 bucket.
  • AWS Lambda functions and the IAM policies required by those functions to create and delete this stack.

To create your resources, complete the following steps:

  1. Sign in to the AWS CloudFormation console in the us-east-1 Region.
  2. Choose Launch Stack (a programmatic alternative is sketched after these steps):

  3. Select I acknowledge that AWS CloudFormation might create IAM resources.
  4. Choose Create stack and wait for the stack creation step to complete.
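If you prefer to launch the stack from code instead of the Launch Stack button, the following boto3 sketch does the equivalent. The stack name and template URL are placeholders; point TemplateURL at the template behind the Launch Stack button, or at a copy you have reviewed and customized.

    import boto3

    cloudformation = boto3.client("cloudformation", region_name="us-east-1")

    # TemplateURL is a placeholder; use the template behind the Launch Stack
    # button, or a copy of it that you have reviewed and customized.
    cloudformation.create_stack(
        StackName="GlueDataQualityStudio",
        TemplateURL="https://example-bucket.s3.amazonaws.com/gluedataqualitystudio.yaml",
        Capabilities=["CAPABILITY_IAM", "CAPABILITY_NAMED_IAM"],  # the stack creates IAM roles and policies
    )

    # Wait until the stack has finished creating before moving on.
    waiter = cloudformation.get_waiter("stack_create_complete")
    waiter.wait(StackName="GlueDataQualityStudio")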

Implement the solution

To start configuring your solution, complete the following steps:

  1. On the AWS Glue Studio console, choose Jobs in the navigation pane.
  2. Select Visual with a blank canvas and choose Create.
  3. Choose the Job Details tab to configure the job.
  4. For Name, enter GlueDataQualityStudio.
  5. For IAM Role, choose the role starting with GlueDataQualityStudio-*.
  6. For Glue version, choose Glue 3.0.
  7. For Job bookmark, choose Disable. This allows you to run this job multiple times with the same input dataset.
  8. For Number of retries, enter 0.
  9. In the Advanced properties section, provide the S3 bucket created by the CloudFormation template (starting with gluedataqualitystudio-*).
  10. Choose Save.
  11. After the job is saved, choose the Visual tab, and on the Source menu, choose Amazon S3.
  12. On the Data source properties – S3 tab, for S3 source type, select S3 location.
  13. Choose Browse S3 and navigate to the prefix /datalake/raw/customer/ in the S3 bucket starting with gluedataqualitystudio-*.
  14. Choose Infer schema.
  15. On the Action menu, choose Evaluate Data Quality.
  16. Choose the Evaluate Data Quality node.
    On the Transform tab, you can now start building data quality rules. The first rule you create checks that Customer_ID is unique and not null, using the isPrimaryKey rule.
  17. On the Rule types tab of the DQDL rule builder, search for isprimarykey and choose the plus sign.
  18. On the Schema tab of the DQDL rule builder, choose the plus sign next to Customer_ID.
  19. In the rule editor, delete id.

    The next rule we add checks that the First_Name column value is present for all rows.
  20. You can also enter data quality rules directly in the rule editor. Add a comma (,) and enter IsComplete "First_Name" after the first rule.

    Next, you add a custom rule to validate that no row exists without a Telephone or Email value. (The complete three-rule ruleset also appears in the script sketch after these steps.)
  21. Enter the following custom rule in the rule editor:
    CustomSql "select count(*) from primary where Telephone is null and Email is null" = 0


    The Evaluate Data Quality feature provides actions to manage the outcome of the job based on the data quality results.

  22. For this post, select Fail job when data quality fails and choose the Fail job without loading target data action. In the Data quality output setting section, choose Browse S3 and navigate to the prefix dqresults in the S3 bucket starting with gluedataqualitystudio-*.
  23. On the Target menu, choose Amazon S3.
  24. Choose the Data target – S3 bucket node.
  25. On the Data target properties – S3 tab, for Format, choose Parquet, and for Compression Type, choose Snappy.
  26. For S3 Target Location, choose Browse S3 and navigate to the prefix /datalake/curated/customer/ in the S3 bucket starting with gluedataqualitystudio-*.
  27. Choose Save, then choose Run.
    You can view the job run details on the Runs tab. In our example, the job fails with the error message "AssertionError: The job failed due to failing DQ rules for node: <node>."
    You can review the data quality results on the Data quality tab. In our example, the custom data quality validation failed because one of the rows in the dataset had no Telephone or Email value. The Evaluate Data Quality results are also written to the S3 bucket in JSON format, based on the data quality result location parameter of the node.
  28. Navigate to the dqresults prefix under the S3 bucket starting with gluedataqualitystudio-*. You will see that the data quality results are partitioned by date.
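For reference, the three rules you built in the visual editor form a single DQDL ruleset, and Glue Studio generates a PySpark script for the job that you can inspect on the Script tab. The following is a minimal sketch of how that ruleset and the Evaluate Data Quality transform might appear in such a script; the frame, bucket name, and publishing options are placeholders or assumptions, so compare it against your own generated script rather than treating it as the exact code.

    from pyspark.context import SparkContext
    from awsglue.context import GlueContext
    from awsgluedq.transforms import EvaluateDataQuality  # transform used by Glue Studio-generated scripts

    glueContext = GlueContext(SparkContext.getOrCreate())

    # The same three rules defined in the visual rule editor, as one DQDL ruleset.
    customer_ruleset = """
        Rules = [
            IsPrimaryKey "Customer_ID",
            IsComplete "First_Name",
            CustomSql "select count(*) from primary where Telephone is null and Email is null" = 0
        ]
    """

    # Stand-in for the source node that reads datalake/raw/customer/ from the
    # gluedataqualitystudio-* bucket; the bucket name here is a placeholder.
    customer_frame = glueContext.create_dynamic_frame.from_options(
        connection_type="s3",
        connection_options={"paths": ["s3://gluedataqualitystudio-EXAMPLE/datalake/raw/customer/"]},
        format="csv",
        format_options={"withHeader": True},
    )

    # Evaluate the ruleset against the frame. The publishing options below follow
    # the pattern of a generated script; verify the exact keys on your job's Script tab.
    dq_results = EvaluateDataQuality.apply(
        frame=customer_frame,
        ruleset=customer_ruleset,
        publishing_options={
            "dataQualityEvaluationContext": "EvaluateDataQuality_node",
            "enableDataQualityCloudWatchMetrics": True,
            "enableDataQualityResultsPublishing": True,
        },
    )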

The node writes its results as JSON files; you can use this output to build custom data quality visualization dashboards.
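The exact JSON schema is not reproduced here, so the following boto3 sketch simply lists and loads whatever result documents the node wrote under the dqresults prefix; inspect the loaded dictionaries to decide which fields to surface in your dashboard. The bucket name is a placeholder.

    import json
    import boto3

    s3 = boto3.client("s3")
    bucket = "gluedataqualitystudio-EXAMPLE"  # placeholder; use your bucket name

    # Results are written under the dqresults prefix, partitioned by date.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix="dqresults/"):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            result = json.loads(body)
            # Print a preview of each result document to see which fields are available.
            print(obj["Key"], json.dumps(result, indent=2)[:500])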

You can also monitor the Evaluate Data Quality node through Amazon CloudWatch metrics and set alarms to send notifications about data quality results. To learn more about how to set up CloudWatch alarms, refer to Using Amazon CloudWatch alarms.
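As a starting point, the following boto3 sketch creates an alarm that fires when any rules fail. The metric namespace, metric name, and dimensions shown are placeholders rather than confirmed values; copy the exact names your Evaluate Data Quality node emits from the CloudWatch console, and point AlarmActions at an SNS topic you own.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Namespace, MetricName, and Dimensions are placeholders: copy the exact
    # values emitted by your Evaluate Data Quality node from the CloudWatch console.
    cloudwatch.put_metric_alarm(
        AlarmName="GlueDataQualityStudio-rules-failed",
        Namespace="Glue Data Quality",
        MetricName="glue.data.quality.rulesfailed",
        Dimensions=[{"Name": "JobName", "Value": "GlueDataQualityStudio"}],
        Statistic="Sum",
        Period=300,
        EvaluationPeriods=1,
        Threshold=0,
        ComparisonOperator="GreaterThanThreshold",
        TreatMissingData="notBreaching",
        AlarmActions=["arn:aws:sns:us-east-1:111122223333:data-quality-alerts"],  # placeholder SNS topic ARN
    )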

Clean up

To avoid incurring future costs and to clean up unused roles and policies, delete the resources you created:

  1. Delete the GlueDataQualityStudio job you created as part of this post.
  2. On the AWS CloudFormation console, delete the GlueDataQualityStudio stack.

Conclusion

AWS Glue Data Quality provides an easy way to measure and monitor the data quality of your ETL pipeline. In this post, you learned how to take the necessary actions based on the data quality results, which helps you maintain high data standards and make confident business decisions.

To learn more about AWS Glue Data Quality, check out the documentation:


About the Authors

Deenbandhu Prasad is a Senior Analytics Specialist at AWS, specializing in big data services. He is passionate about helping customers build modern data architectures on the AWS Cloud. He has helped customers of all sizes implement data management, data warehouse, and data lake solutions.

Yannis Mentekidis is a Senior Software Development Engineer on the AWS Glue team.


