
Spark, Ray, and Python for Scalable Data Science

MP4 | Video: AVC 1280 x 720 | Audio: AAC 48 kHz 2ch | Duration: 07:01:37 | 8.57 GB
Genre: eLearning | Language: English

Conceptual overviews and code-along sessions get you scaling up your data science projects using Spark, Ray, and Python.

Overview

Machine learning is moving from futuristic AI projects to data analysis on your desk. You need to go beyond following along in discussions to coding machine learning tasks. Spark, Ray, and Python for Scalable Data Science LiveLessons show you how to scale machine learning and artificial intelligence projects using Python, Spark, and Ray.

About the Instructor

Jonathan Dinu is the founder of Zipfian Academy—an advanced immersive training program for data scientists and data engineers in San Francisco—and served as its CAO/CTO before it was acquired by Galvanize, where he is now the VP of Academic Excellence. He first discovered his love of all things data while studying Computer Science and Physics at UC Berkeley, and in a former life he worked for Alpine Data Labs developing distributed machine learning algorithms for predictive analytics on Hadoop.

Jonathan is a dedicated educator, author, and speaker with a passion for sharing the things he has learned in the most creative ways he can. He has run data science workshops at Strata and PyData (among others), built a Data Visualization course with Udacity, and served on the UC Berkeley Extension Data Science Advisory Board. Currently he is writing a book on practical Data Science applications using Python. When he is not working with students, you can find him blogging about data, visualization, and education.

Skill Level

Beginner to Intermediate

Learn How To

Integrate Python and distributed computing
Scale data processing with Spark
Conduct exploratory data analysis with PySpark
Utilize parallel computing with Ray
Scale machine learning and artificial intelligence applications with Ray

Who Should Take This Course

This course is a good fit for anyone who needs to improve their fundamental understanding of scalable data processing integrated with Python for use in machine learning or artificial intelligence applications.

Course Requirements

A basic understanding of programming in Python (variables, basic control flow, simple scripts).
Familiarity with the vocabulary of data processing at scale, machine learning (dataset, training set, test set, model), and AI.

Lesson Descriptions

Lesson 1: Introduction to Distributed Computing in Python

Lesson 1 starts with an introduction to the data science process and workflow. It then turns to a bit of history on why frameworks like Spark and Ray are necessary. Next comes a short primer on distributed systems theory, followed by a survey of Python-based distributed computing frameworks. Finally, Jonathan introduces the Spark ecosystem and explains how Spark compares to Ray.
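
To give a feel for that comparison, here is a minimal sketch (not taken from the course) that drives both frameworks from one ordinary Python script on a local machine; the app name and toy workload are illustrative only.

    # Spark: a driver submits lazily evaluated jobs to executors.
    from pyspark.sql import SparkSession
    import ray

    spark = SparkSession.builder.master("local[*]").appName("demo").getOrCreate()
    print(spark.range(1000).count())  # forces execution of the distributed job

    # Ray: functions decorated with @ray.remote run eagerly as tasks on workers.
    ray.init()

    @ray.remote
    def greet():
        return "hello from a Ray worker"

    print(ray.get(greet.remote()))

    ray.shutdown()
    spark.stop()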

Lesson 2: Scaling Data Processing with Spark

Lesson 2 goes into detail on the Spark framework, beginning with a “Hello World” example of programming with Spark. Then Jonathan turns to the Spark APIs. You get some experience with one of Spark’s primary data structures, the resilient distributed dataset (RDD). Next come key-value pairs and how Spark performs MapReduce-style operations on them. The lesson finishes up with a bit of Spark internals and the overall Spark application lifecycle.
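
For orientation, a “Hello World” of this kind might look like the classic RDD word count below; this is a sketch rather than the course’s code, and the input file name is hypothetical. The reduceByKey step is where the MapReduce-style shuffle of key-value pairs happens.

    from pyspark import SparkContext

    sc = SparkContext("local[*]", "word-count")

    counts = (
        sc.textFile("input.txt")                # read lines into an RDD
          .flatMap(lambda line: line.split())   # split each line into words
          .map(lambda word: (word, 1))          # build key-value pairs
          .reduceByKey(lambda a, b: a + b)      # MapReduce-style aggregation
    )

    print(counts.take(10))
    sc.stop()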

Lesson 3: Exploratory Data Analysis with PySpark

In Lesson 3, Jonathan continues using Spark but now in the context of a larger data science workflow centered around natural language processing (NLP). He starts off with a general introduction to exploratory data analysis (EDA), followed by a quick tour of Jupyter notebooks. Next he discusses how to do EDA with Spark at scale, and then he shows you how to create statistics and data visualizations to summarize data sets. Finally, he tackles the NLP example, showing you how to transform a large corpus of text into a numerical representation suitable for machine learning.
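
A rough sketch of that kind of pipeline, assuming a hypothetical corpus.csv with a text column (not the dataset used in the course), could combine DataFrame summary statistics with a tokenizer and bag-of-words vectorizer:

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import Tokenizer, CountVectorizer

    spark = SparkSession.builder.appName("eda-nlp").getOrCreate()

    df = spark.read.csv("corpus.csv", header=True)
    df.describe().show()  # quick summary statistics for EDA

    tokens = Tokenizer(inputCol="text", outputCol="words").transform(df)
    model = CountVectorizer(inputCol="words", outputCol="features").fit(tokens)
    features = model.transform(tokens)  # numeric representation of the text
    features.select("features").show(5, truncate=False)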

Lesson 4: Parallel Computing with Ray

Lesson 4 introduces the Ray programming API, with Jonathan comparing the similarities and differences between the Ray and Spark APIs. You learn how to distribute functions with Ray, as well as how to perform operations on distributed classes and objects using Ray actors. Finally, Jonathan finishes up with a large-scale simulation to highlight the strengths of the Ray framework.
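
For reference, the two Ray primitives the lesson covers look roughly like this minimal sketch (not the course’s simulation): a remote function that runs as a task, and a remote class that runs as a stateful actor.

    import ray

    ray.init()

    @ray.remote
    def square(x):
        return x * x  # executed as a distributed task

    @ray.remote
    class Counter:
        def __init__(self):
            self.total = 0

        def add(self, n):
            self.total += n  # state lives inside the actor process
            return self.total

    print(ray.get([square.remote(i) for i in range(4)]))  # [0, 1, 4, 9]

    counter = Counter.remote()
    print(ray.get(counter.add.remote(5)))  # 5

    ray.shutdown()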

Lesson 5: Scaling AI Applications with Ray

Lesson 5 discusses how Ray enables you to scale up machine learning and artificial intelligence applications with Python. The lesson starts with the general model training and evaluation process in Python. Then it turns to how Ray enables you to scale both the evaluation and the tuning of your models. You see how Ray makes very efficient hyperparameter tuning possible. You also see how, once you have a trained model, Ray can serve predictions from it. Finally, the lesson finishes with an introduction to how Ray can enable you both to deploy machine learning models to production and to monitor them once they are there.
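
As an illustration of the hyperparameter-tuning piece, a minimal Ray Tune sketch (assuming the Ray 1.x-style tune.run API, with a toy objective standing in for a real model) might look like this:

    import ray
    from ray import tune

    def objective(config):
        # Hypothetical "loss" that depends on the hyperparameter lr.
        loss = (config["lr"] - 0.1) ** 2
        tune.report(loss=loss)  # report the metric back to Tune

    ray.init()
    analysis = tune.run(
        objective,
        config={"lr": tune.grid_search([0.001, 0.01, 0.1, 1.0])},
    )
    print(analysis.get_best_config(metric="loss", mode="min"))
    ray.shutdown()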


Password / extraction password: 0daydown

Download rapidgator

Download nitroflare
