yogitasn/DataEngineeringAWS

Introduction

A music streaming startup, Sparkify, has grown its user base and song database even more and wants to move its data warehouse to a data lake. Its data resides in S3: a directory of JSON logs on user activity in the app, as well as a directory of JSON metadata on the songs in the app.

An ETL pipeline extracts the data from S3, processes it using Spark, and loads it back into S3 as a set of dimensional tables. This allows the analytics team to continue finding insights into what songs their users are listening to.
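A minimal sketch of how such a pipeline might be wired up with PySpark; the bucket paths and session settings below are illustrative assumptions, not values taken from the repository:

```python
from pyspark.sql import SparkSession

# Hypothetical input/output locations; substitute your own buckets.
SONG_DATA = "s3a://udacity-dend/song_data/*/*/*/*.json"
LOG_DATA = "s3a://udacity-dend/log_data/*/*.json"
OUTPUT = "s3a://my-output-bucket/"

def create_spark_session():
    """Build a Spark session with the hadoop-aws package so it can read S3."""
    return (SparkSession.builder
            .config("spark.jars.packages", "org.apache.hadoop:hadoop-aws:2.7.0")
            .getOrCreate())

spark = create_spark_session()
song_df = spark.read.json(SONG_DATA)   # song metadata
log_df = spark.read.json(LOG_DATA)     # user activity events
```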

The fact table 'songplays' and the dimension tables 'users', 'songs', 'artists', and 'time' are created in etl.py.
The log and song data is loaded from S3 into song and log tables. The final 'songplays' table fetches the song and artist information by joining the song and log tables, and the dimension tables 'users', 'songs', 'artists', and 'time' are also populated from either the song or the log data.
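For illustration, a sketch of how the 'songplays' fact table could be assembled from the DataFrames above; the column names follow the Sparkify dataset, but the exact join condition is an assumption about how etl.py matches records:

```python
from pyspark.sql import functions as F

# Keep only actual song plays from the activity log.
songplay_events = log_df.filter(F.col("page") == "NextSong")

# Join log events to song metadata on title, artist name, and duration
# to recover song_id and artist_id for each play.
songplays = (songplay_events
    .join(song_df,
          (songplay_events.song == song_df.title) &
          (songplay_events.artist == song_df.artist_name) &
          (songplay_events.length == song_df.duration),
          how="left")
    .select(
        F.monotonically_increasing_id().alias("songplay_id"),
        (F.col("ts") / 1000).cast("timestamp").alias("start_time"),
        F.col("userId").alias("user_id"),
        "level",
        "song_id",
        "artist_id",
        F.col("sessionId").alias("session_id"),
        "location",
        F.col("userAgent").alias("user_agent"),
    ))
```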
Perform the following steps to get data into the tables:

  1. Set the appropriate AWS credentials in the dwh.cfg file (a sample layout is sketched after this list).
  2. etl.py defines the schema, loads the data from S3, and inserts it into the final tables.
  3. Run python etl.py.
  4. The script loads the staging tables, inserts records into the final tables, and saves them as parquet files for analysis (see the write sketch after this list).
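
A hedged sketch of the credentials setup and the parquet write. The config section and key names are a common convention, not confirmed from the repository's dwh.cfg, and time_table and OUTPUT stand in for names built in etl.py:

```ini
[AWS]
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
```

```python
import configparser
import os

# Read credentials from dwh.cfg and export them so Spark's S3
# connector can pick them up from the environment.
config = configparser.ConfigParser()
config.read("dwh.cfg")
os.environ["AWS_ACCESS_KEY_ID"] = config["AWS"]["AWS_ACCESS_KEY_ID"]
os.environ["AWS_SECRET_ACCESS_KEY"] = config["AWS"]["AWS_SECRET_ACCESS_KEY"]

# Write a final table back to S3 as parquet; partitioning (here by year
# and month, a typical choice for the time table) speeds up later reads.
time_table.write.mode("overwrite") \
    .partitionBy("year", "month") \
    .parquet(OUTPUT + "time/")
```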

The final songs, artists, and users tables below show the details loaded from the song and log tables.

Screenshot
