2022-10-11 08:27:32 -05:00
#!/usr/bin/env bash
[SPARK-32204][SPARK-32182][DOCS] Add a quickstart page with Binder integration in PySpark documentation
### What changes were proposed in this pull request?
This PR proposes to:
- add a notebook with a Binder integration which allows users to try PySpark in a live notebook. Please [try this here](https://mybinder.org/v2/gh/HyukjinKwon/spark/SPARK-32204?filepath=python%2Fdocs%2Fsource%2Fgetting_started%2Fquickstart.ipynb).
- reuse this notebook as a quickstart guide in PySpark documentation.
Note that Binder turns a Git repo into a collection of interactive notebooks. It is based on a Docker image: once somebody builds it, other people can reuse the image for a specific commit.
Therefore, if we run Binder with the images based on released tags in Spark, virtually all users can instantly launch the Jupyter notebooks.
<br/>
I made a simple demo to make it easier to review. Please see:
- [Main page](https://hyukjin-spark.readthedocs.io/en/stable/). Note that the link ("Live Notebook") in the main page wouldn't work since this PR is not merged yet.
- [Quickstart page](https://hyukjin-spark.readthedocs.io/en/stable/getting_started/quickstart.html)
<br/>
When reviewing the notebook file itself, please give me direct feedback, which I will appreciate and address.
Another way might be:
- open [here](https://mybinder.org/v2/gh/HyukjinKwon/spark/SPARK-32204?filepath=python%2Fdocs%2Fsource%2Fgetting_started%2Fquickstart.ipynb).
- edit / change / update the notebook. Please feel free to change whatever you want. I can apply it as is, or update it slightly more when I apply it to this PR.
- download it as a `.ipynb` file:

- upload the `.ipynb` file here in a GitHub comment. Then, I will push a commit with that file, crediting you correctly, of course.
- alternatively, push a commit into this PR right away if that's easier for you (if you're a committer).
References:
- https://pandas.pydata.org/pandas-docs/stable/user_guide/10min.html
- https://databricks.com/jp/blog/2020/03/31/10-minutes-from-pandas-to-koalas-on-apache-spark.html - my own blog post .. :-) and https://koalas.readthedocs.io/en/latest/getting_started/10min.html
### Why are the changes needed?
To improve PySpark's usability. The current quickstart for Python users is not very friendly.
### Does this PR introduce _any_ user-facing change?
Yes, it will add a documentation page, and expose a live notebook to PySpark users.
### How was this patch tested?
Manually tested, and GitHub Actions builds will test.
Closes #29491 from HyukjinKwon/SPARK-32204.
Lead-authored-by: HyukjinKwon <gurwls223@apache.org>
Co-authored-by: Fokko Driesprong <fokko@apache.org>
Signed-off-by: HyukjinKwon <gurwls223@apache.org>
2020-08-26 12:23:24 +09:00
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

# This file is used for Binder integration to install PySpark available in
# Jupyter notebook.
2023-10-27 21:20:40 +09:00
# SPARK-45706: Should fail fast. Otherwise, the Binder image is successfully
# built, and it cannot be rebuilt.
set -o pipefail
set -e

2023-04-17 11:49:40 +09:00
VERSION=$(python -c "exec(open('python/pyspark/version.py').read()); print(__version__)")

2024-08-27 14:51:02 +09:00
TAG=$(git describe --tags --exact-match 2> /dev/null || true)
[SPARK-37170][PYTHON][DOCS] Pin PySpark version installed in the Binder environment for tagged commit
### What changes were proposed in this pull request?
This PR proposes to pin the version of PySpark to be installed in the live notebook environment for tagged commits.
### Why are the changes needed?
I noticed that PySpark `3.1.2` is installed in the live notebook environment even though the notebook is for PySpark `3.2.0`.
http://spark.apache.org/docs/3.2.0/api/python/getting_started/index.html
I guess someone accessed Binder and built the container image with `v3.2.0` before we published the `pyspark` package to PyPI.
https://mybinder.org/
I think it's difficult to rebuild the image manually.
To avoid such accidents, I propose this change.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
Confirmed that if a commit is tagged, we can avoid building the container image with an unexpected version of `pyspark` in Binder.
```
...
Downloading plotly-5.3.1-py2.py3-none-any.whl (23.9 MB)
ERROR: Could not find a version that satisfies the requirement pyspark[ml,mllib,pandas_on_spark,sql]==3.3.0.dev0 (from versions: 2.1.2, 2.1.3, 2.2.0.post0, 2.2.1, 2.2.2, 2.2.3, 2.3.0, 2.3.1, 2.3.2, 2.3.3, 2.3.4, 2.4.0, 2.4.1, 2.4.2, 2.4.3, 2.4.4, 2.4.5, 2.4.6, 2.4.7, 2.4.8, 3.0.0, 3.0.1, 3.0.2, 3.0.3, 3.1.1, 3.1.2, 3.2.0)
ERROR: No matching distribution found for pyspark[ml,mllib,pandas_on_spark,sql]==3.3.0.dev0
Removing intermediate container de55eed5966e
The command '/bin/sh -c ./binder/postBuild' returned a non-zero code: 1
Built image, launching...
Failed to connect to event stream
```
If a commit is not tagged, an old version of `pyspark` can be installed if the exact specified version has not been published to PyPI.
https://hub.gke2.mybinder.org/user/sarutak-spark-ky222nbf/notebooks/python/docs/source/getting_started/quickstart_df.ipynb
Closes #34449 from sarutak/pin-pyspark-version-binder.
Authored-by: Kousuke Saruta <sarutak@oss.nttdata.com>
Signed-off-by: Hyukjin Kwon <gurwls223@apache.org>
2021-11-01 10:03:39 +09:00
# If a commit is tagged, the exactly specified version of pyspark should be installed to avoid
# the kind of accident where an old version of pyspark is installed in the live notebook environment.
# See SPARK-37170
if [ -n "$TAG" ]; then
  SPECIFIER="=="
else
  SPECIFIER="<="
fi
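Combined with the `pip install` line below, this branch yields an exact pin (`==`) on tagged commits and an upper bound (`<=`) otherwise. A standalone sketch with assumed example values (not read from a real repository):

```shell
# Standalone sketch of the specifier logic above, using assumed example
# values for VERSION and TAG rather than deriving them from a repository.
VERSION="3.3.0"
for TAG in "v3.3.0" ""; do
  if [ -n "$TAG" ]; then
    SPECIFIER="=="   # tagged commit: pin the exact release
  else
    SPECIFIER="<="   # untagged commit: allow the newest release <= VERSION
  fi
  echo "pyspark$SPECIFIER$VERSION"
done
```

This prints `pyspark==3.3.0` for the tagged case and `pyspark<=3.3.0` for the untagged one, which is why an untagged dev build can still resolve to an older published release.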
2025-02-16 22:28:08 -08:00
pip install plotly "pandas<2.0.0" "pyspark[sql,ml,mllib,pandas_on_spark,connect]$SPECIFIER$VERSION"
2021-12-13 17:55:45 +09:00
2023-02-24 11:32:53 +09:00
# Add sbin to PATH to run `start-connect-server.sh`.
SPARK_HOME=$(python -c "from pyspark.find_spark_home import _find_spark_home; print(_find_spark_home())")
echo "export PATH=${PATH}:${SPARK_HOME}/sbin" >> ~/.profile
2023-04-17 09:49:33 +09:00
echo "export SPARK_HOME=${SPARK_HOME}" >> ~/.profile
# Add Spark version to env for running command dynamically based on Spark version.
SPARK_VERSION=$(python -c "import pyspark; print(pyspark.__version__)")
echo "export SPARK_VERSION=${SPARK_VERSION}" >> ~/.profile
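The `echo ... >> ~/.profile` appends above only take effect once a later login shell sources the profile. The pattern can be exercised in isolation; this sketch writes to a temp file instead of `~/.profile`, with an assumed example path:

```shell
# Sketch of the append-and-source pattern above, using a temp file and an
# assumed SPARK_HOME so it can run outside the Binder image.
PROFILE=$(mktemp)
SPARK_HOME="/srv/spark"   # assumed example path, not a real installation
echo "export PATH=\${PATH}:${SPARK_HOME}/sbin" >> "$PROFILE"
echo "export SPARK_HOME=${SPARK_HOME}" >> "$PROFILE"
. "$PROFILE"              # a login shell sources the profile at startup
case ":$PATH:" in
  *":${SPARK_HOME}/sbin:"*) echo "sbin is on PATH" ;;
esac
rm -f "$PROFILE"
```

After sourcing, scripts under `$SPARK_HOME/sbin` (such as `start-connect-server.sh` in the real image) are resolvable by name.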
2023-10-21 16:37:27 -05:00
# Suppress warnings from Spark jobs, and UI progress bar.
mkdir -p ~/.ipython/profile_default/startup
2023-11-16 11:54:53 +09:00
echo "from pyspark.sql import SparkSession
SparkSession.builder.config('spark.ui.showConsoleProgress', 'false').getOrCreate().sparkContext.setLogLevel('FATAL')
" > ~/.ipython/profile_default/startup/00-init.py
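The startup hook written above runs in every new IPython kernel, silencing the console progress bar and raising the log threshold before the first cell executes. A self-contained sketch of the same write, targeting a temp directory instead of `~/.ipython`:

```shell
# Sketch of the IPython startup-hook generation above, written to a temp
# directory so it can run anywhere without touching ~/.ipython.
DIR=$(mktemp -d)
mkdir -p "$DIR/profile_default/startup"
echo "from pyspark.sql import SparkSession
SparkSession.builder.config('spark.ui.showConsoleProgress', 'false').getOrCreate().sparkContext.setLogLevel('FATAL')
" > "$DIR/profile_default/startup/00-init.py"
grep -c "SparkSession" "$DIR/profile_default/startup/00-init.py"   # -> 2
```

IPython executes every `*.py` file in `profile_default/startup/` in lexicographic order at kernel startup, so the `00-` prefix makes this run before any user code.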