Data Pipelines with Apache Airflow

Author: Bas P. Harenslak, Julian Rutger de Ruiter

Publisher: Simon and Schuster

Published: 2021-04-27

Total Pages: 478

ISBN-10: 1617296902

This book teaches you how to build and maintain effective data pipelines. You'll explore the most common usage patterns, including aggregating multiple data sources, connecting to and from data lakes, and cloud deployment.
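
For a flavor of what such pipelines look like in code, here is a minimal Airflow 2.x sketch of a DAG that aggregates two sources; the DAG id, task names, and extract/merge callables are illustrative assumptions, not examples from the book.

```python
# Minimal Airflow 2.x DAG sketch: pull two hypothetical sources, then merge.
# The callables below are placeholders for real extract/aggregate logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_source(name: str) -> None:
    print(f"extracting {name}")  # placeholder for a real extract step


def merge_sources() -> None:
    print("merging extracted sources")  # placeholder for an aggregation step


with DAG(
    dag_id="aggregate_sources",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_a = PythonOperator(
        task_id="extract_a", python_callable=extract_source, op_args=["source_a"]
    )
    extract_b = PythonOperator(
        task_id="extract_b", python_callable=extract_source, op_args=["source_b"]
    )
    merge = PythonOperator(task_id="merge", python_callable=merge_sources)

    # Both extracts run in parallel; the merge task waits for both.
    [extract_a, extract_b] >> merge
```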

Data Pipelines Pocket Reference

Author: James Densmore

Publisher: O'Reilly Media

Published: 2021-02-10

Total Pages: 277

ISBN-10: 1492087807

Data pipelines are the foundation for success in data analytics. Moving data from numerous diverse sources and transforming it to provide context is the difference between having data and actually gaining value from it. This pocket reference defines data pipelines and explains how they work in today's modern data stack. You'll learn common considerations and key decision points when implementing pipelines, such as batch versus streaming data ingestion and build versus buy. This book addresses the most common decisions made by data professionals and discusses foundational concepts that apply to open source frameworks, commercial products, and homegrown solutions. You'll learn:
- What a data pipeline is and how it works
- How data is moved and processed on modern data infrastructure, including cloud platforms
- Common tools and products used by data engineers to build pipelines
- How pipelines support analytics and reporting needs
- Considerations for pipeline maintenance, testing, and alerting
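
To make the batch-versus-streaming distinction concrete, a batch ingestion step might look like the following minimal Python sketch, which loads one bounded CSV extract into SQLite; the file name, table, and schema are assumptions for illustration.

```python
# Minimal batch ingestion sketch: load one CSV extract into a SQLite table.
# "orders.csv" and the orders(id, amount) schema are hypothetical.
import csv
import sqlite3

conn = sqlite3.connect("warehouse.db")
conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL)")

with open("orders.csv", newline="") as f:
    rows = [(int(r["id"]), float(r["amount"])) for r in csv.DictReader(f)]

# A batch job moves a bounded chunk of records in one scheduled run;
# a streaming pipeline would instead consume records as they arrive.
conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
conn.commit()
conn.close()
```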

Building Machine Learning Pipelines

Author: Hannes Hapke, Catherine Nelson

Publisher: "O'Reilly Media, Inc."

Published: 2020-07-13

Total Pages: 398

ISBN-10: 1492053147

Companies are spending billions on machine learning projects, but it’s money wasted if the models can’t be deployed effectively. In this practical guide, Hannes Hapke and Catherine Nelson walk you through the steps of automating a machine learning pipeline using the TensorFlow ecosystem. You’ll learn the techniques and tools that will cut deployment time from days to minutes, so that you can focus on developing new models rather than maintaining legacy systems. Data scientists, machine learning engineers, and DevOps engineers will discover how to go beyond model development to successfully productize their data science projects, while managers will better understand the role they play in helping to accelerate these projects.
- Understand the steps to build a machine learning pipeline
- Build your pipeline using components from TensorFlow Extended
- Orchestrate your machine learning pipeline with Apache Beam, Apache Airflow, and Kubeflow Pipelines
- Work with data using TensorFlow Data Validation and TensorFlow Transform
- Analyze a model in detail using TensorFlow Model Analysis
- Examine fairness and bias in your model performance
- Deploy models with TensorFlow Serving or TensorFlow Lite for mobile devices
- Learn privacy-preserving machine learning techniques
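
A TFX pipeline wires such components together. The following minimal sketch follows the pattern from TFX's public tutorials; the paths, component selection, and local runner choice are assumptions for illustration, not code from the book.

```python
# Minimal TFX sketch (pattern from the public TFX tutorials): ingest CSV
# data and compute statistics, then run locally. Paths are hypothetical.
from tfx import v1 as tfx

example_gen = tfx.components.CsvExampleGen(input_base="data/")  # ingestion
statistics_gen = tfx.components.StatisticsGen(                  # data stats
    examples=example_gen.outputs["examples"]
)

pipeline = tfx.dsl.Pipeline(
    pipeline_name="toy_pipeline",
    pipeline_root="pipeline_root/",
    metadata_connection_config=(
        tfx.orchestration.metadata.sqlite_metadata_connection_config("metadata.db")
    ),
    components=[example_gen, statistics_gen],
)

# A production deployment would swap this for an Airflow/Kubeflow runner.
tfx.orchestration.LocalDagRunner().run(pipeline)
```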

Data Engineering with Python

Author: Paul Crickard

Publisher: Packt Publishing Ltd

Published: 2020-10-23

Total Pages: 357

ISBN-10: 1839212306

Build, monitor, and manage real-time data pipelines to create data engineering infrastructure efficiently using open-source Apache projects.
Key Features
- Become well-versed in data architectures, data preparation, and data optimization skills with the help of practical examples
- Design data models and learn how to extract, transform, and load (ETL) data using Python
- Schedule, automate, and monitor complex data pipelines in production
Book Description
Data engineering provides the foundation for data science and analytics, and forms an important part of all businesses. This book will help you to explore various tools and methods that are used for understanding the data engineering process using Python. The book will show you how to tackle challenges commonly faced in different aspects of data engineering. You’ll start with an introduction to the basics of data engineering, along with the technologies and frameworks required to build data pipelines to work with large datasets. You’ll learn how to transform and clean data and perform analytics to get the most out of your data. As you advance, you'll discover how to work with big data of varying complexity and production databases, and build data pipelines. Using real-world examples, you’ll build architectures on which you’ll learn how to deploy data pipelines. By the end of this Python book, you’ll have gained a clear understanding of data modeling techniques, and will be able to confidently build data engineering pipelines for tracking data, running quality checks, and making necessary changes in production.
What you will learn
- Understand how data engineering supports data science workflows
- Discover how to extract data from files and databases and then clean, transform, and enrich it
- Configure processors for handling different file formats as well as both relational and NoSQL databases
- Find out how to implement a data pipeline and dashboard to visualize results
- Use staging and validation to check data before landing in the warehouse
- Build real-time pipelines with staging areas that perform validation and handle failures
- Get to grips with deploying pipelines in the production environment
Who this book is for
This book is for data analysts, ETL developers, and anyone looking to get started with or transition to the field of data engineering or refresh their knowledge of data engineering using Python. This book will also be useful for students planning to build a career in data engineering or IT professionals preparing for a transition. No previous knowledge of data engineering is required.
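
As a taste of that extract-transform-load flow in Python, here is a minimal sketch with pandas that stages, validates, and loads a batch; the file, column names, and validation check are illustrative assumptions rather than the book's examples.

```python
# Minimal extract-transform-load sketch with a staging validation check.
# The CSV file, column names, and warehouse table are hypothetical.
import sqlite3

import pandas as pd

# Extract: read a raw file into a staging DataFrame.
staging = pd.read_csv("raw_events.csv")

# Transform: clean and enrich before anything lands in the warehouse.
staging = staging.dropna(subset=["user_id"])
staging["event_date"] = pd.to_datetime(staging["event_ts"]).dt.date

# Validate: refuse to load a batch that looks wrong (here, an empty one).
if len(staging) == 0:
    raise ValueError("staging batch is empty; aborting load")

# Load: land the validated batch in the warehouse table.
with sqlite3.connect("warehouse.db") as conn:
    staging.to_sql("events", conn, if_exists="append", index=False)
```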

Building Big Data Pipelines with Apache Beam

Author: Jan Lukavsky

Publisher: Packt Publishing Ltd

Published: 2022-01-21

Total Pages: 342

ISBN-10: 1800566565

Implement, run, operate, and test data processing pipelines using Apache Beam.
Key Features
- Understand how to improve usability and productivity when implementing Beam pipelines
- Learn how to use stateful processing to implement complex use cases using Apache Beam
- Implement, test, and run Apache Beam pipelines with the help of expert tips and techniques
Book Description
Apache Beam is an open source unified programming model for implementing and executing data processing pipelines, including Extract, Transform, and Load (ETL), batch, and stream processing. This book will help you to confidently build data processing pipelines with Apache Beam. You'll start with an overview of Apache Beam and understand how to use it to implement basic pipelines. You'll also learn how to test and run the pipelines efficiently. As you progress, you'll explore how to structure your code for reusability and also use various Domain Specific Languages (DSLs). Later chapters will show you how to use schemas and query your data using (streaming) SQL. Finally, you'll understand advanced Apache Beam concepts, such as implementing your own I/O connectors. By the end of this book, you'll have gained a deep understanding of the Apache Beam model and be able to apply it to solve problems.
What you will learn
- Understand the core concepts and architecture of Apache Beam
- Implement stateless and stateful data processing pipelines
- Use state and timers to process real-time events
- Structure your code for reusability
- Use streaming SQL to process real-time data for increased productivity and data accessibility
- Run a pipeline using a portable runner and implement data processing using the Apache Beam Python SDK
- Implement Apache Beam I/O connectors using the Splittable DoFn API
Who this book is for
This book is for data engineers, data scientists, and data analysts who want to learn how Apache Beam works. Intermediate-level knowledge of the Java programming language is assumed.
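
The unified model is easiest to see in code. Below is a minimal batch pipeline using the Apache Beam Python SDK on the local DirectRunner; the toy data and transform chain are assumptions for illustration.

```python
# Minimal Apache Beam batch pipeline (Python SDK): a stateless transform
# chain run on the default local DirectRunner. Input values are toy data.
import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "Create" >> beam.Create(["alice,3", "bob,7", "alice,2"])
        | "Parse" >> beam.Map(lambda line: line.split(","))
        | "ToKV" >> beam.Map(lambda kv: (kv[0], int(kv[1])))   # key-value pairs
        | "SumPerKey" >> beam.CombinePerKey(sum)               # aggregate per key
        | "Print" >> beam.Map(print)
    )
```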

Data Science on AWS

Author: Chris Fregly, Antje Barth

Publisher: "O'Reilly Media, Inc."

Published: 2021-04-07

Total Pages: 524

ISBN-10: 1492079367

With this practical book, AI and machine learning practitioners will learn how to successfully build and deploy data science projects on Amazon Web Services. The Amazon AI and machine learning stack unifies data science, data engineering, and application development to help level up your skills. This guide shows you how to build and run pipelines in the cloud, then integrate the results into applications in minutes instead of days. Throughout the book, authors Chris Fregly and Antje Barth demonstrate how to reduce cost and improve performance.
- Apply the Amazon AI and ML stack to real-world use cases for natural language processing, computer vision, fraud detection, conversational devices, and more
- Use automated machine learning to implement a specific subset of use cases with SageMaker Autopilot
- Dive deep into the complete model development lifecycle for a BERT-based NLP use case, including data ingestion, analysis, model training, and deployment
- Tie everything together into a repeatable machine learning operations pipeline
- Explore real-time ML, anomaly detection, and streaming analytics on data streams with Amazon Kinesis and Managed Streaming for Apache Kafka
- Learn security best practices for data science projects and workflows, including identity and access management, authentication, authorization, and more
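
As one small illustration of streaming on AWS, here is a minimal boto3 sketch that publishes an event to a hypothetical Kinesis data stream; the stream name, payload, and pre-configured AWS credentials/region are assumptions, and the book's own examples go much further.

```python
# Minimal sketch of publishing a record to an Amazon Kinesis data stream
# with boto3. The stream name and payload are hypothetical; valid AWS
# credentials and a default region are assumed to be configured.
import json

import boto3

kinesis = boto3.client("kinesis")

event = {"user_id": "u123", "action": "click"}  # toy event payload
kinesis.put_record(
    StreamName="example-stream",             # hypothetical stream
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],           # controls shard assignment
)
```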

Learn Docker in a Month of Lunches

Author: Elton Stoneman

Publisher: Manning Publications

Published: 2020-08-04

Total Pages: 462

ISBN-10: 1617297054

Summary
Go from zero to production readiness with Docker in 22 bite-sized lessons! Learn Docker in a Month of Lunches is an accessible task-focused guide to Docker on Linux, Windows, or Mac systems. In it, you’ll learn practical Docker skills to help you tackle the challenges of modern IT, from cloud migration and microservices to handling legacy systems. There’s no excessive theory or niche use cases—just a quick-and-easy guide to the essentials of Docker you’ll use every day. Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.
About the technology
The idea behind Docker is simple: package applications in lightweight virtual containers that can be easily installed. The results of this simple idea are huge! Docker makes it possible to manage applications without creating custom infrastructures. Free, open source, and battle-tested, Docker has quickly become must-know technology for developers and administrators.
About the book
Learn Docker in a Month of Lunches introduces Docker concepts through a series of brief hands-on lessons. Following a learning path perfected by author Elton Stoneman, you’ll run containers by chapter 2 and package applications by chapter 3. Each lesson teaches a practical skill you can practice on Windows, macOS, and Linux systems. By the end of the month you’ll know how to containerize and run any kind of application with Docker.
What's inside
- Package applications to run in containers
- Put containers into production
- Build optimized Docker images
- Run containerized apps at scale
About the reader
For IT professionals. No previous Docker experience required.
About the author
Elton Stoneman is a consultant, a former architect at Docker, a Microsoft MVP, and a Pluralsight author.
Table of Contents
PART 1 - UNDERSTANDING DOCKER CONTAINERS AND IMAGES
1. Before you begin
2. Understanding Docker and running Hello World
3. Building your own Docker images
4. Packaging applications from source code into Docker images
5. Sharing images with Docker Hub and other registries
6. Using Docker volumes for persistent storage
PART 2 - RUNNING DISTRIBUTED APPLICATIONS IN CONTAINERS
7. Running multi-container apps with Docker Compose
8. Supporting reliability with health checks and dependency checks
9. Adding observability with containerized monitoring
10. Running multiple environments with Docker Compose
11. Building and testing applications with Docker and Docker Compose
PART 3 - RUNNING AT SCALE WITH A CONTAINER ORCHESTRATOR
12. Understanding orchestration: Docker Swarm and Kubernetes
13. Deploying distributed applications as stacks in Docker Swarm
14. Automating releases with upgrades and rollbacks
15. Configuring Docker for secure remote access and CI/CD
16. Building Docker images that run anywhere: Linux, Windows, Intel, and Arm
PART 4 - GETTING YOUR CONTAINERS READY FOR PRODUCTION
17. Optimizing your Docker images for size, speed, and security
18. Application configuration management in containers
19. Writing and managing application logs with Docker
20. Controlling HTTP traffic to containers with a reverse proxy
21. Asynchronous communication with a message queue
22. Never the end
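
The book works with the Docker CLI; purely to illustrate the same "run a container" idea from code, here is a minimal sketch using the Docker SDK for Python (an assumption for illustration, not the book's CLI-based approach).

```python
# Minimal sketch using the Docker SDK for Python (not the book's CLI-based
# approach): start a container from a public image and read its output.
# Assumes a local Docker daemon is running and the docker package is installed.
import docker

client = docker.from_env()                   # connect to the local daemon
logs = client.containers.run("hello-world")  # pull if needed, run, capture logs
print(logs.decode("utf-8"))
```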

Building an Anonymization Pipeline

Author: Luk Arbuckle, Khaled El Emam

Publisher: O'Reilly Media

Published: 2020-04-13

Total Pages: 167

ISBN-10: 1492053406

How can you use data in a way that protects individual privacy but still provides useful and meaningful analytics? With this practical book, data architects and engineers will learn how to establish and integrate secure, repeatable anonymization processes into their data flows and analytics in a sustainable manner. Luk Arbuckle and Khaled El Emam from Privacy Analytics explore end-to-end solutions for anonymizing device and IoT data, based on collection models and use cases that address real business needs. These examples come from some of the most demanding data environments, such as healthcare, using approaches that have withstood the test of time.
- Create anonymization solutions diverse enough to cover a spectrum of use cases
- Match your solutions to the data you use, the people you share it with, and your analysis goals
- Build anonymization pipelines around various data collection models to cover different business needs
- Generate an anonymized version of original data or use an analytics platform to generate anonymized outputs
- Examine the ethical issues around the use of anonymized data
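
One small ingredient of such a pipeline is pseudonymization of direct identifiers. Here is a minimal Python sketch using keyed hashing; real anonymization requires far more than this, and the record fields and key handling shown are hypothetical placeholders.

```python
# Minimal pseudonymization sketch: replace a direct identifier with a keyed
# hash (HMAC). This is one small ingredient of an anonymization pipeline,
# not a complete privacy solution; the key handling here is hypothetical.
import hashlib
import hmac

SECRET_KEY = b"load-from-a-secret-manager"  # placeholder key management


def pseudonymize(identifier: str) -> str:
    """Deterministically map an identifier to an opaque token."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()


record = {"patient_id": "P-1001", "heart_rate": 72}  # toy device record
record["patient_id"] = pseudonymize(record["patient_id"])
print(record)
```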

Data Science in Production

Author: Ben Weber

Publisher:

Published: 2020

Total Pages: 234

ISBN-13: 9781652064633

Putting predictive models into production is one of the most direct ways that data scientists can add value to an organization. By learning how to build and deploy scalable model pipelines, data scientists can own more of the model production process and more rapidly deliver data products. This book provides a hands-on approach to scaling up Python code to work in distributed environments in order to build robust pipelines. Readers will learn how to set up machine learning models as web endpoints, serverless functions, and streaming pipelines using multiple cloud environments. It is intended for analytics practitioners with hands-on experience with Python libraries such as Pandas and scikit-learn, and will focus on scaling up prototype models to production. From startups to trillion-dollar companies, data science is playing an important role in helping organizations maximize the value of their data. This book helps data scientists to level up their careers by taking ownership of data products, with applied examples that demonstrate how to:
- Translate models developed on a laptop to scalable deployments in the cloud
- Develop end-to-end systems that automate data science workflows
- Own a data product from conception to production
The accompanying Jupyter notebooks provide examples of scalable pipelines across multiple cloud environments, tools, and libraries (github.com/bgweber/DS_Production).
Book Contents
Here are the topics covered by Data Science in Production:
- Chapter 1: Introduction - Motivates the use of Python, discusses the discipline of applied data science, presents the data sets, models, and cloud environments used throughout the book, and provides an overview of automated feature engineering.
- Chapter 2: Models as Web Endpoints - Shows how to use web endpoints for consuming data and hosting machine learning models as endpoints using the Flask and Gunicorn libraries. We'll start with scikit-learn models and also set up a deep learning endpoint with Keras.
- Chapter 3: Models as Serverless Functions - Builds upon the previous chapter and shows how to set up model endpoints as serverless functions using AWS Lambda and GCP Cloud Functions.
- Chapter 4: Containers for Reproducible Models - Shows how to use containers for deploying models with Docker. We'll also explore scaling up with ECS and Kubernetes, and building web applications with Plotly Dash.
- Chapter 5: Workflow Tools for Model Pipelines - Focuses on scheduling automated workflows using Apache Airflow. We'll set up a pipeline that pulls data from BigQuery, applies a model, and saves the results.
- Chapter 6: PySpark for Batch Modeling - Introduces readers to PySpark using the community edition of Databricks. We'll build a batch model pipeline that pulls data from a data lake, generates features, applies a model, and stores the results to a NoSQL database.
- Chapter 7: Cloud Dataflow for Batch Modeling - Introduces the core components of Cloud Dataflow and implements a batch model pipeline for reading data from BigQuery, applying an ML model, and saving the results to Cloud Datastore.
- Chapter 8: Streaming Model Workflows - Introduces readers to Kafka and PubSub for streaming messages in a cloud environment. After working through this material, readers will learn how to use these message brokers to create streaming model pipelines with PySpark and Dataflow that provide near real-time predictions.
Excerpts of these chapters are available on Medium (@bgweber), and a book sample is available on Leanpub.
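
The model-as-web-endpoint pattern from Chapter 2 looks roughly like this minimal Flask sketch; the stand-in scoring function and route are illustrative assumptions, not the book's code.

```python
# Minimal model-as-web-endpoint sketch with Flask. The "model" here is a
# stand-in scoring function; a real service would load a trained model.
from flask import Flask, jsonify, request

app = Flask(__name__)


def predict(features: dict) -> float:
    # Stand-in for a trained scikit-learn/Keras model's predict call.
    return 0.5 + 0.1 * float(features.get("x", 0))


@app.route("/predict", methods=["POST"])
def predict_endpoint():
    features = request.get_json(force=True)
    return jsonify({"prediction": predict(features)})


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # Gunicorn would serve this in production
```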

Designing Cloud Data Platforms

Author: Danil Zburivsky, Lynda Partner

Publisher: Simon and Schuster

Published: 2021-03-17

Total Pages: 334

ISBN-10: 1638350965

Summary
Centralized data warehouses, the long-time de facto standard for housing data for analytics, are rapidly giving way to multi-faceted cloud data platforms. Companies that embrace modern cloud data platforms benefit from an integrated view of their business using all of their data and can take advantage of advanced analytic practices to drive predictions and as yet unimagined data services. Designing Cloud Data Platforms is a hands-on guide to envisioning and designing a modern scalable data platform that takes full advantage of the flexibility of the cloud. As you read, you’ll learn the core components of a cloud data platform design, along with the role of key technologies like Spark and Kafka Streams. You’ll also explore setting up processes to manage cloud-based data, keeping it secure, and using advanced analytic and BI tools to analyze it. Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.
About the technology
Well-designed pipelines, storage systems, and APIs eliminate the complicated scaling and maintenance required with on-prem data centers. Once you learn the patterns for designing cloud data platforms, you’ll maximize performance no matter which cloud vendor you use.
About the book
In Designing Cloud Data Platforms, Danil Zburivsky and Lynda Partner reveal a six-layer approach that increases flexibility and reduces costs. Discover patterns for ingesting data from a variety of sources, then learn to harness pre-built services provided by cloud vendors.
What's inside
- Best practices for structured and unstructured data sets
- Cloud-ready machine learning tools
- Metadata and real-time analytics
- Defensive architecture, access, and security
About the reader
For data professionals familiar with the basics of cloud computing, and Hadoop or Spark.
About the author
Danil Zburivsky has over 10 years of experience designing and supporting large-scale data infrastructure for enterprises across the globe. Lynda Partner is the VP of Analytics-as-a-Service at Pythian and has been on the business side of data for over 20 years.
Table of Contents
1. Introducing the data platform
2. Why a data platform and not just a data warehouse
3. Getting bigger and leveraging the Big 3: Amazon, Microsoft Azure, and Google
4. Getting data into the platform
5. Organizing and processing data
6. Real-time data processing and analytics
7. Metadata layer architecture
8. Schema management
9. Data access and security
10. Fueling business value with data platforms
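
To give a feel for the processing layer the authors describe, here is a minimal PySpark sketch that reads from a raw landing zone, cleans and enriches the data, and writes it to a processed zone; the bucket paths, column names, and the assumption of a configured S3 connector are all illustrative.

```python
# Minimal sketch of a cloud data platform processing step with PySpark:
# read from a raw/landing zone, transform, write to a processed zone.
# Paths and columns are hypothetical; the s3a:// locations assume the
# Hadoop S3 connector and credentials are configured.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("platform-processing").getOrCreate()

raw = spark.read.json("s3a://platform/landing/events/")   # ingestion layer
processed = (
    raw.filter(F.col("user_id").isNotNull())              # basic cleanup
       .withColumn("event_date", F.to_date("event_ts"))   # enrichment
)
processed.write.mode("append").parquet("s3a://platform/processed/events/")
```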