Reading this book will empower you to take advantage of Apache Spark to optimize your data pipelines and teach you to craft modular, testable Spark applications. You will create and deploy mission-critical streaming Spark applications in a low-stress environment that paves the way for your own path to production.
What You Will Learn
- Simplify data transformation with Spark Pipelines and Spark SQL
- Bridge data engineering with machine learning
- Architect modular data pipeline applications
- Build reusable application components and libraries
- Containerize your Spark applications for consistency and reliability
- Use Docker and Kubernetes to deploy your Spark applications
- Speed up application experimentation using Apache Zeppelin and Docker
- Understand serializable structured data and data contracts
- Apply effective strategies for optimizing data in your data lakes
- Build end-to-end Spark structured streaming applications using Redis and Apache Kafka
- Embrace testing for your batch and streaming applications
- Deploy and monitor your Spark applications
Who This Book Is For
- Professional software engineers who want to apply their current skills to new and exciting opportunities within the data ecosystem
- Practicing data engineers looking for a guiding light as they navigate the many challenges of moving from batch to streaming modes
- Data architects who wish to provide clear, concise direction for how best to harness Apache Spark within their organization
- Anyone interested in the ins and outs of becoming a modern data engineer in today's fast-paced, data-hungry world