apache / arrow

Apache Arrow is the universal columnar format and multi-language toolbox for fast data interchange and in-memory analytics

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
from cpython.pycapsule cimport PyCapsule_CheckExact, PyCapsule_GetPointer, PyCapsule_New
from collections.abc import Sequence
import os
import warnings
from cython import sizeof
cdef extern from "<variant>" namespace "std":
    c_bool holds_alternative[T](...)
    T get[T](...)
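
# The std::variant declarations above expose ``holds_alternative`` and ``get``
# so Cython code in this module can inspect C++ variant values where needed.
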
cdef _sequence_to_array(object sequence, object mask, object size,
                        DataType type, CMemoryPool* pool, c_bool from_pandas):
    cdef:
        int64_t c_size
        PyConversionOptions options
        shared_ptr[CChunkedArray] chunked
    if type is not None:
        options.type = type.sp_type
    if size is not None:
        options.size = size
    options.from_pandas = from_pandas
    options.ignore_timezone = os.environ.get('PYARROW_IGNORE_TIMEZONE', False)
    with nogil:
        chunked = GetResultValue(
            ConvertPySequence(sequence, mask, options, pool)
        )
    if chunked.get().num_chunks() == 1:
        return pyarrow_wrap_array(chunked.get().chunk(0))
    else:
        return pyarrow_wrap_chunked_array(chunked)
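
# Illustrative usage of the sequence path above (a sketch, assuming pyarrow is
# importable as ``pa``): pa.array() routes plain Python sequences through
# _sequence_to_array and returns a single Array, or a ChunkedArray when the
# data does not fit in one chunk.
#
#     import pyarrow as pa
#     arr = pa.array([1, 2, None], type=pa.int8())
#     assert arr.null_count == 1
#     assert arr.to_pylist() == [1, 2, None]
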
cdef inline _is_array_like(obj):
    if np is None:
        return False
    if isinstance(obj, np.ndarray):
        return True
    return pandas_api._have_pandas_internal() and pandas_api.is_array_like(obj)
def _ndarray_to_arrow_type(object values, DataType type):
    return pyarrow_wrap_data_type(_ndarray_to_type(values, type))


cdef shared_ptr[CDataType] _ndarray_to_type(object values,
                                            DataType type) except *:
    cdef shared_ptr[CDataType] c_type
    dtype = values.dtype
    if type is None and dtype != object:
        c_type = GetResultValue(NumPyDtypeToArrow(dtype))
    if type is not None:
        c_type = type.sp_type
    return c_type
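
# Illustrative sketch of the dtype-to-type mapping above (assuming NumPy and
# pyarrow are available): a non-object NumPy dtype determines the Arrow type
# unless an explicit type is passed.
#
#     import numpy as np
#     import pyarrow as pa
#     assert pa.array(np.zeros(3, dtype='float32')).type == pa.float32()
#     assert pa.array(np.zeros(3, dtype='float32'), type=pa.float64()).type == pa.float64()
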
cdef _ndarray_to_array(object values, object mask, DataType type,
                       c_bool from_pandas, c_bool safe, CMemoryPool* pool):
    cdef:
        shared_ptr[CChunkedArray] chunked_out
        shared_ptr[CDataType] c_type = _ndarray_to_type(values, type)
        CCastOptions cast_options = CCastOptions(safe)
    with nogil:
        check_status(NdarrayToArrow(pool, values, mask, from_pandas,
                                    c_type, cast_options, &chunked_out))
    if chunked_out.get().num_chunks() > 1:
        return pyarrow_wrap_chunked_array(chunked_out)
    else:
        return pyarrow_wrap_array(chunked_out.get().chunk(0))
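
# Sketch of the from_pandas switch handled by this path (example adapted from
# the ARROW-838 discussion): NaN values in a float ndarray become nulls only
# when from_pandas=True; otherwise no null checking is performed.
#
#     import numpy as np
#     import pyarrow as pa
#     data = np.array([1.0, np.nan, 3.0])
#     assert pa.array(data).null_count == 0
#     assert pa.array(data, from_pandas=True).null_count == 1
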
cdef _codes_to_indices(object codes, object mask, DataType type,
                       MemoryPool memory_pool):
    """
    Convert the codes of a pandas Categorical to indices for a pyarrow
    DictionaryArray, taking into account missing values + mask
    """
    if mask is None:
        mask = codes == -1
    else:
        mask = mask | (codes == -1)
    return array(codes, mask=mask, type=type, memory_pool=memory_pool)
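
# Hedged example of how categorical codes reach this helper: pandas Categorical
# data converts to a DictionaryArray, with the -1 sentinel codes folded into
# the null mask.
#
#     import pandas as pd
#     import pyarrow as pa
#     arr = pa.array(pd.Categorical(['a', None, 'b']))
#     assert pa.types.is_dictionary(arr.type)
#     assert arr.null_count == 1
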
def _handle_arrow_array_protocol(obj, type, mask, size):
    if mask is not None or size is not None:
        raise ValueError(
            "Cannot specify a mask or a size when passing an object that is "
            "converted with the __arrow_array__ protocol.")
    res = obj.__arrow_array__(type=type)
    if not isinstance(res, (Array, ChunkedArray)):
        raise TypeError("The object's __arrow_array__ method does not "
                        "return a pyarrow Array or ChunkedArray.")
    if isinstance(res, ChunkedArray) and res.num_chunks == 1:
        res = res.chunk(0)
    if type is not None and res.type != type:
        res = res.cast(type)
    return res
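
# Sketch of the __arrow_array__ protocol handled above; ``Wrapper`` is a
# hypothetical user-defined class, not part of pyarrow. Any object whose
# __arrow_array__(type=None) method returns an Array or ChunkedArray can be
# passed to pa.array().
#
#     import pyarrow as pa
#
#     class Wrapper:
#         def __init__(self, values):
#             self._values = values
#
#         def __arrow_array__(self, type=None):
#             return pa.array(self._values, type=type)
#
#     assert pa.array(Wrapper([1, 2, 3])).to_pylist() == [1, 2, 3]
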
def array(object obj, type=None, mask=None, size=None, from_pandas=None,
          bint safe=True, MemoryPool memory_pool=None):
    """
    Create pyarrow.Array instance from a Python object.

    Parameters
    ----------
    obj : sequence, iterable, ndarray, pandas.Series, Arrow-compatible array
    If both type and size are specified, may be a single-use iterable. If
    not strongly-typed, the Arrow type will be inferred for the resulting array.
    Any Arrow-compatible array that implements the Arrow PyCapsule Protocol
    (has an ``__arrow_c_array__`` or ``__arrow_c_device_array__`` method)
    can be passed as well (a sketch appears in the Examples section below).
type : pyarrow.DataType
Explicit type to attempt to coerce to, otherwise will be inferred from
the data.
mask : array[bool], optional
Indicate which values are null (True) or not null (False).
size : int64, optional
    Size of the elements. If the input is larger than size, bail at this
    length. For iterators, if size is larger than the input iterator, this
    will be treated as a "max size", but will involve an initial allocation
    of size followed by a resize to the actual size (so if you know the
    exact size, specifying it correctly will give you better performance).
from_pandas : bool, default None
    Use pandas's semantics for inferring nulls from values in
    ndarray-like data. If passed, the mask takes precedence, but
    if a value is unmasked (not-null), but still null according to
    pandas semantics, then it is null (see the ``from_pandas`` example
    below). Defaults to False if not passed explicitly by the user, or
    True if a pandas object is passed in.
safe : bool, default True
Check for overflows or other unsafe conversions.
memory_pool : pyarrow.MemoryPool, optional
If not passed, will allocate memory from the currently-set default
memory pool.
Returns
-------
array : pyarrow.Array or pyarrow.ChunkedArray
A ChunkedArray instead of an Array is returned if:
- the object data overflowed binary storage.
    - the object's ``__arrow_array__`` protocol method returned a chunked
      array (see the last example below).
Notes
-----
    The timezone will be preserved in the returned array for timezone-aware
    data; no timezone is attached for naive timestamps.
    Internally, timezone-aware data is stored as UTC values, with the
    timezone set in the data type.
    Pandas's DateOffsets and dateutil.relativedelta.relativedelta are by
    default converted to MonthDayNanoIntervalArray. relativedelta leapdays
    are ignored, as are all absolute fields on both objects. datetime.timedelta
    can also be converted to MonthDayNanoIntervalArray, but this requires
    passing MonthDayNanoIntervalType explicitly (see the Examples section).
    Converting to dictionary array will promote to a wider integer type for
    indices if the number of distinct values cannot be represented, even if
    the index type was explicitly set. This means that if there are more than
    127 values, the returned dictionary array's index type will be at least
    pa.int16(), even if pa.int8() was passed to the function. Note that an
    explicit index type will not be demoted even if it is wider than required.
    The returned array supports Python's standard operators
    for element-wise operations, i.e. arithmetic (``+``, ``-``, ``/``, ``%``, ``**``),
    bitwise (``&``, ``|``, ``^``, ``>>``, ``<<``) and others.
    They can be used directly instead of calling the underlying
    ``pyarrow.compute`` functions explicitly.
Examples
--------
>>> import pandas as pd
>>> import pyarrow as pa
>>> pa.array(pd.Series([1, 2]))
<pyarrow.lib.Int64Array object at ...>
[
1,
2
]
>>> pa.array(["a", "b", "a"], type=pa.dictionary(pa.int8(), pa.string()))
<pyarrow.lib.DictionaryArray object at ...>
...
-- dictionary:
[
"a",
"b"
]
-- indices:
[
0,
1,
0
]
>>> import numpy as np
>>> pa.array(pd.Series([1, 2]), mask=np.array([0, 1], dtype=bool))
<pyarrow.lib.Int64Array object at ...>
[
1,
null
]
>>> arr = pa.array(range(1024), type=pa.dictionary(pa.int8(), pa.int64()))
>>> arr.type.index_type
DataType(int16)
>>> arr1 = pa.array([1, 2, 3], type=pa.int8())
>>> arr2 = pa.array([4, 5, 6], type=pa.int8())
>>> arr1 + arr2
<pyarrow.lib.Int8Array object at ...>
[
5,
7,
9
]
>>> val = pa.scalar(42)
>>> val - arr1
<pyarrow.lib.Int64Array object at ...>
[
41,
40,
39
]
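
    Any object exposing the Arrow PyCapsule Protocol is accepted as well. A
    minimal sketch (the ``CapsuleWrapper`` class below is purely illustrative
    and simply forwards the capsules of an existing pyarrow Array):

    >>> class CapsuleWrapper:
    ...     def __init__(self, arr):
    ...         self._arr = arr
    ...     def __arrow_c_array__(self, requested_schema=None):
    ...         return self._arr.__arrow_c_array__(requested_schema)
    >>> pa.array(CapsuleWrapper(pa.array([1, 2, 3])))
    <pyarrow.lib.Int64Array object at ...>
    [
      1,
      2,
      3
    ]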
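
    The ``from_pandas`` flag controls whether NaN is treated as null; a small
    sketch of the difference:

    >>> pa.array([1.5, float("nan")], from_pandas=True).null_count
    1
    >>> pa.array([1.5, float("nan")], from_pandas=False).null_count
    0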
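
    As noted above, ``datetime.timedelta`` values convert to
    MonthDayNanoIntervalArray only when the interval type is passed
    explicitly, for example:

    >>> import datetime
    >>> arr = pa.array([datetime.timedelta(days=1)], type=pa.month_day_nano_interval())
    >>> arr.type == pa.month_day_nano_interval()
    True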
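
    A minimal sketch of an object whose ``__arrow_array__`` method hands back
    a chunked result (the ``ChunkedData`` class is hypothetical, defined only
    for this example):

    >>> class ChunkedData:
    ...     def __arrow_array__(self, type=None):
    ...         return pa.chunked_array([[1, 2], [3]], type=type)
    >>> isinstance(pa.array(ChunkedData()), pa.ChunkedArray)
    True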
"""
cdef:
CMemoryPool* pool = maybe_unbox_memory_pool(memory_pool)
bint is_pandas_object = False
bint c_from_pandas
type = ensure_type(type, allow_none=True)
extension_type = None
if type is not None and type.id == _Type_EXTENSION:
extension_type = type
type = type.storage_type
if from_pandas is None:
c_from_pandas = False
else:
c_from_pandas = from_pandas
if isinstance(obj, Array):
if type is not None and not obj.type.equals(type):
obj = obj.cast(type, safe=safe, memory_pool=memory_pool)
return obj
if hasattr(obj, '__arrow_array__'):
return _handle_arrow_array_protocol(obj, type, mask, size)
elif hasattr(obj, '__arrow_c_device_array__'):
if type is not None:
requested_type = type.__arrow_c_schema__()
else:
requested_type = None
schema_capsule, array_capsule = obj.__arrow_c_device_array__(requested_type)
out_array = Array._import_from_c_device_capsule(schema_capsule, array_capsule)
if type is not None and out_array.type != type:
# PyCapsule interface type coercion is best effort, so we need to
# check the type of the returned array and cast if necessary
out_array = out_array.cast(type, safe=safe, memory_pool=memory_pool)
return out_array
elif hasattr(obj, '__arrow_c_array__'):
if type is not None:
requested_type = type.__arrow_c_schema__()
else:
requested_type = None
schema_capsule, array_capsule = obj.__arrow_c_array__(requested_type)
out_array = Array._import_from_c_capsule(schema_capsule, array_capsule)
if type is not None and out_array.type != type:
# PyCapsule interface type coercion is best effort, so we need to
# check the type of the returned array and cast if necessary
out_array = out_array.cast(type, safe=safe, memory_pool=memory_pool)
return out_array
elif _is_array_like(obj):
if mask is not None:
if _is_array_like(mask):
mask = get_values(mask, &is_pandas_object)
else:
raise TypeError("Mask must be a numpy array "
"when converting numpy arrays")
values = get_values(obj, &is_pandas_object)
if is_pandas_object and from_pandas is None:
c_from_pandas = True
ARROW-838: [Python] Expand pyarrow.array to handle NumPy arrays not originating in pandas This unifies the ingest path for 1D data into `pyarrow.array`. I added the argument `from_pandas` to turn null sentinel checking on or off: ``` In [8]: arr = np.random.randn(10000000) In [9]: arr[::3] = np.nan In [10]: arr2 = pa.array(arr) In [11]: arr2.null_count Out[11]: 0 In [12]: %timeit arr2 = pa.array(arr) The slowest run took 5.43 times longer than the fastest. This could mean that an intermediate result is being cached. 10000 loops, best of 3: 68.4 µs per loop In [13]: arr2 = pa.array(arr, from_pandas=True) In [14]: arr2.null_count Out[14]: 3333334 In [15]: %timeit arr2 = pa.array(arr, from_pandas=True) 1 loop, best of 3: 228 ms per loop ``` When the data is contiguous, it is always zero-copy, but when `from_pandas=True` and no null mask is passed, a null bitmap is constructed and populated. This also permits sequence reads into integers smaller than int64: ``` In [17]: pa.array([1, 2, 3, 4], type='i1') Out[17]: <pyarrow.lib.Int8Array object at 0x7ffa1c1c65e8> [ 1, 2, 3, 4 ] ``` Oh, I also added NumPy-like string type aliases: ``` In [18]: pa.int32() == 'i4' Out[18]: True ``` Author: Wes McKinney <wes.mckinney@twosigma.com> Closes #1146 from wesm/expand-py-array-method and squashes the following commits: 1570e525 [Wes McKinney] Code review comments d3bbb3c3 [Wes McKinney] Handle type aliases in cast, too 797f0151 [Wes McKinney] Allow null checking to be skipped with from_pandas=False in pyarrow.array f2802fc7 [Wes McKinney] Cleaner codepath for numpy->arrow conversions 587c575a [Wes McKinney] Add direct types sequence converters for more data types cf40b767 [Wes McKinney] Add type aliases, some unit tests 7b530e4b [Wes McKinney] Consolidate both sequence and ndarray/Series/Index conversion in pyarrow.Array
2017-09-29 23:02:58 -05:00
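A small usage sketch of the mask handling validated just below (the mask must be a one-dimensional boolean array of the same length as the values, with True marking nulls):
```
import numpy as np
import pyarrow as pa

values = np.array([1, 2, 3], dtype="int64")
mask = np.array([False, True, False])   # True marks a null slot

arr = pa.array(values, mask=mask)
# arr.null_count == 1 and arr[1] is a null scalar
```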
if isinstance(values, np.ma.MaskedArray):
if mask is not None:
raise ValueError("Cannot pass a numpy masked array and "
"specify a mask at the same time")
else:
# don't use shrunken masks
mask = None if values.mask is np.ma.nomask else values.mask
values = values.data
if mask is not None:
if mask.dtype != np.bool_:
raise TypeError("Mask must be boolean dtype")
if mask.ndim != 1:
raise ValueError("Mask must be 1D array")
if len(values) != len(mask):
raise ValueError(
"Mask is a different length from sequence being converted")
if hasattr(values, '__arrow_array__'):
return _handle_arrow_array_protocol(values, type, mask, size)
elif (pandas_api.is_categorical(values) and
type is not None and type.id != Type_DICTIONARY):
result = _ndarray_to_array(
np.asarray(values), mask, type, c_from_pandas, safe, pool
)
elif pandas_api.is_categorical(values):
if type is not None:
index_type = type.index_type
value_type = type.value_type
if values.ordered != type.ordered:
raise ValueError(
"The 'ordered' flag of the passed categorical values "
"does not match the 'ordered' of the specified type. ")
else:
index_type = None
value_type = None
indices = _codes_to_indices(
values.codes, mask, index_type, memory_pool)
try:
dictionary = array(
values.categories.values, type=value_type,
memory_pool=memory_pool)
except TypeError:
# TODO when removing the deprecation warning, this whole
# try/except can be removed (to bubble the TypeError of
# the first array(..) call)
if value_type is not None:
warnings.warn(
"The dtype of the 'categories' of the passed "
f"categorical values ({values.categories.dtype}) does not match the "
f"specified type ({value_type}). For now ignoring the specified "
"type, but in the future this mismatch will raise a "
"TypeError",
FutureWarning, stacklevel=2)
dictionary = array(
values.categories.values, memory_pool=memory_pool)
else:
raise
return DictionaryArray.from_arrays(
indices, dictionary, ordered=values.ordered, safe=safe)
else:
if pandas_api.have_pandas:
values, type = pandas_api.compat.get_datetimetz_type(
values, obj.dtype, type)
if type and type.id == _Type_RUN_END_ENCODED:
arr = _ndarray_to_array(
values, mask, type.value_type, c_from_pandas, safe, pool)
result = _pc().run_end_encode(arr, run_end_type=type.run_end_type,
memory_pool=memory_pool)
else:
result = _ndarray_to_array(values, mask, type, c_from_pandas, safe,
pool)
else:
if type and type.id == _Type_RUN_END_ENCODED:
arr = _sequence_to_array(
obj, mask, size, type.value_type, pool, from_pandas)
result = _pc().run_end_encode(arr, run_end_type=type.run_end_type,
memory_pool=memory_pool)
# ConvertPySequence does strict conversion if type is explicitly passed
else:
result = _sequence_to_array(obj, mask, size, type, pool, c_from_pandas)
if extension_type is not None:
result = ExtensionArray.from_storage(extension_type, result)
return result
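The protocol branches above (`__arrow_array__`, `__arrow_c_device_array__`, `__arrow_c_array__`) let third-party containers hand their data to `pa.array` without going through Python objects. A minimal sketch of an object exposing the Arrow PyCapsule interface by delegating to a wrapped pyarrow array (the wrapper class here is hypothetical):
```
import pyarrow as pa

class MyColumn:
    """Hypothetical container exposing the Arrow C data interface."""

    def __init__(self, data):
        self._arr = pa.array(data)

    def __arrow_c_array__(self, requested_schema=None):
        # Delegate capsule export to the wrapped pyarrow Array.
        return self._arr.__arrow_c_array__(requested_schema)

col = MyColumn([1, 2, None])
pa.array(col)                    # imported via the C data interface
pa.array(col, type=pa.int32())   # best-effort coercion; cast applied if needed
```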
def asarray(values, type=None):
"""
Convert to pyarrow.Array, inferring type if not provided.
Parameters
----------
values : array-like
This can be a sequence, numpy.ndarray, pyarrow.Array or
pyarrow.ChunkedArray. If a ChunkedArray is passed, the output will be
a ChunkedArray, otherwise the output will be an Array.
type : string or DataType
Explicitly construct the array with this type. Attempt to cast if
indicated type is different.
Returns
-------
arr : Array or ChunkedArray
"""
if isinstance(values, (Array, ChunkedArray)):
if type is not None and not values.type.equals(type):
values = values.cast(type)
return values
else:
return array(values, type=type)
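A brief usage sketch of `asarray`:
```
import numpy as np
import pyarrow as pa

pa.asarray([1, 2, 3])                         # inferred Int64Array
pa.asarray(np.array([1.5, 2.5]), type="f4")   # cast to float32 on the way in

arr = pa.array([1, 2, 3])
pa.asarray(arr) is arr                        # True: already an Array, returned as-is
```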
def nulls(size, type=None, MemoryPool memory_pool=None):
"""
Create a strongly-typed Array instance with all elements null.
Parameters
----------
size : int
Array length.
type : pyarrow.DataType, default None
Explicit type for the array. By default use NullType.
memory_pool : MemoryPool, default None
Arrow MemoryPool to use for allocations. Uses the default memory
pool if not passed.
Returns
-------
arr : Array
Examples
--------
>>> import pyarrow as pa
>>> pa.nulls(10)
<pyarrow.lib.NullArray object at ...>
10 nulls
>>> pa.nulls(3, pa.uint32())
<pyarrow.lib.UInt32Array object at ...>
[
null,
null,
null
]
"""
cdef:
CMemoryPool* pool = maybe_unbox_memory_pool(memory_pool)
int64_t length = size
shared_ptr[CDataType] ty
shared_ptr[CArray] arr
type = ensure_type(type, allow_none=True)
if type is None:
type = null()
ty = pyarrow_unwrap_data_type(type)
with nogil:
arr = GetResultValue(MakeArrayOfNull(ty, length, pool))
return pyarrow_wrap_array(arr)
def repeat(value, size, MemoryPool memory_pool=None):
"""
Create an Array instance whose slots are the given scalar.
Parameters
----------
value : Scalar-like object
Either a pyarrow.Scalar or any python object coercible to a Scalar.
size : int
Number of times to repeat the scalar in the output Array.
memory_pool : MemoryPool, default None
Arrow MemoryPool to use for allocations. Uses the default memory
pool if not passed.
Returns
-------
arr : Array
Examples
--------
>>> import pyarrow as pa
>>> pa.repeat(10, 3)
<pyarrow.lib.Int64Array object at ...>
[
10,
10,
10
]
>>> pa.repeat([1, 2], 2)
<pyarrow.lib.ListArray object at ...>
[
[
1,
2
],
[
1,
2
]
]
>>> pa.repeat("string", 3)
<pyarrow.lib.StringArray object at ...>
[
"string",
"string",
"string"
]
>>> pa.repeat(pa.scalar({'a': 1, 'b': [1, 2]}), 2)
<pyarrow.lib.StructArray object at ...>
-- is_valid: all not null
-- child 0 type: int64
[
1,
1
]
-- child 1 type: list<item: int64>
[
[
1,
2
],
[
1,
2
]
]
"""
cdef:
CMemoryPool* pool = maybe_unbox_memory_pool(memory_pool)
int64_t length = size
shared_ptr[CArray] c_array
shared_ptr[CScalar] c_scalar
if not isinstance(value, Scalar):
value = scalar(value, memory_pool=memory_pool)
c_scalar = (<Scalar> value).unwrap()
with nogil:
c_array = GetResultValue(
MakeArrayFromScalar(deref(c_scalar), length, pool)
)
return pyarrow_wrap_array(c_array)
def infer_type(values, mask=None, from_pandas=False):
"""
Attempt to infer the Arrow data type that can hold the passed Python
sequence in an Array object.
Parameters
----------
values : array-like
Sequence to infer type from.
mask : ndarray (bool type), optional
Optional exclusion mask where True marks null, False non-null.
from_pandas : bool, default False
Use pandas's NA/null sentinel values for type inference.
Returns
-------
type : DataType
"""
cdef:
shared_ptr[CDataType] out
c_bool use_pandas_sentinels = from_pandas
if mask is not None and not isinstance(mask, np.ndarray):
mask = np.array(mask, dtype=bool)
ARROW-9992: [C++][Python] Refactor python to arrow conversions based on a reusable conversion API ### Targets of the refactoring: - PythonToArrow converters based on a common API - PyBytesView to use `Result` return values and contain `is_utf8` flag - PyConversionOptions is now available from all converters so we can honor its flags ### Fixes - ARROW-9993 [Python] Tzinfo - string roundtrip fails on pytz.StaticTzInfo objects - ARROW-9994 [C++][Python] Auto chunking nested array containing binary-like fields result malformed output - ARROW-9996 [C++] Dictionary is unset when calling DictionaryArray.GetScalar for null values - ~ARROW-9997 [Python] StructScalar.as_py() fails if the type has duplicate field names~ - ARROW-9999 [Python] Support constructing dictionary array directly through pa.array() - ARROW-10000 [C++][Python] Support constructing StructArray from list of key-value pairs - ARROW-9593 [Python] Add custom pickle reducers for DictionaryScalar - ARROW-6281 [Python] Produce chunked arrays for nested types in pyarrow.array - ARROW-2367 [Python] ListArray has trouble with sizes greater than kMaximumCapacity - ARROW-9976: [Python] ArrowCapacityError when doing Table.from_pandas with large dataframe ### Backward incompatibility ~~Since a struct type can contain duplicated field names we cannot return a struct scalar as a mapping, so I had to change the `.as_py()` representation to return with a list of key-value pairs.~~ ### TODOs: - [x] ensure that the large memory tests are passing - [x] benchmark and check binary size again ### Library size Before: ``` 12M Sep 25 15:05 libarrow.200.0.0.dylib 2.7M Sep 25 15:07 libarrow_python.200.0.0.dylib ``` After: ``` 12M Sep 25 15:46 libarrow.200.0.0.dylib 2.1M Sep 25 15:50 libarrow_python.200.0.0.dylib ``` ### Benchmarks Executed the following ASV benchmark: ```bash asv continuous --bench convert_builtins master py2ar --no-only-changed --split ``` After some optimization: ``` Benchmarks that have improved: before after ratio [f358a29b] [18d1c052] <master> <py2ar> - 2.78±0.03ms 2.45±0.03ms 0.88 convert_builtins.ConvertPyListToArray.time_convert('bool') - 3.59±0.01ms 3.12±0.02ms 0.87 convert_builtins.ConvertPyListToArray.time_convert('int32') - 3.37±0.01ms 2.73±0.01ms 0.81 convert_builtins.ConvertPyListToArray.time_convert('uint32') - 3.74±0.02ms 3.03±0.01ms 0.81 convert_builtins.ConvertPyListToArray.time_convert('int64') - 3.38±0.01ms 2.69±0.01ms 0.80 convert_builtins.ConvertPyListToArray.time_convert('uint64') - 2.83±0.01ms 2.24±0.01ms 0.79 convert_builtins.ConvertPyListToArray.time_convert('float32') - 3.92±0.02ms 2.99±0.02ms 0.76 convert_builtins.ConvertPyListToArray.time_convert('binary10') - 14.1±0.04ms 8.89±0.05ms 0.63 convert_builtins.ConvertPyListToArray.time_convert('unicode') - 5.60±0.01ms 3.24±0.03ms 0.58 convert_builtins.ConvertPyListToArray.time_convert('ascii') - 5.37±0.02ms 2.91±0.04ms 0.54 convert_builtins.ConvertPyListToArray.time_convert('binary') Benchmarks that have stayed the same: before after ratio [f358a29b] [18d1c052] <master> <py2ar> 14.8±0.02ms 15.5±0.1ms 1.05 convert_builtins.ConvertPyListToArray.time_convert('decimal') 16.4±0.7ms 15.1±0.6ms 0.92 convert_builtins.ConvertPyListToArray.time_convert('struct from tuples') 34.4±0.3ms 31.5±0.4ms 0.92 convert_builtins.ConvertPyListToArray.time_convert('int64 list') 16.7±0.7ms 15.1±0.6ms ~0.91 convert_builtins.ConvertPyListToArray.time_convert('struct') 2.42±0.02ms 2.05±0.03ms ~0.85 convert_builtins.ConvertPyListToArray.time_convert('float64') ``` Closes #8088 from 
kszucs/py2ar Authored-by: Krisztián Szűcs <szucs.krisztian@gmail.com> Signed-off-by: Benjamin Kietzman <bengilgit@gmail.com>
2020-09-25 20:49:16 -04:00
out = GetResultValue(InferArrowType(values, mask, use_pandas_sentinels))
return pyarrow_wrap_data_type(out)
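A short usage sketch of `infer_type`:
```
import numpy as np
import pyarrow as pa

pa.infer_type([1, 2, 3])                              # int64
pa.infer_type(["a", "b", None])                       # string
pa.infer_type([1, 2], mask=np.array([False, True]))   # int64
pa.infer_type([1.0, np.nan], from_pandas=True)        # double; NaN treated as null
```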
def arange(int64_t start, int64_t stop, int64_t step=1, *, memory_pool=None):
"""
Create an array of evenly spaced values within a given interval.
This function is similar to Python's `range` function.
The resulting array will contain values starting from `start` up to but not
including `stop`, with a step size of `step`.
Parameters
----------
start : int
The starting value for the sequence. The returned array will include this value.
stop : int
The stopping value for the sequence. The returned array will not include this value.
step : int, default 1
The spacing between values.
memory_pool : MemoryPool, optional
A memory pool to use for memory allocations.
Raises
------
ArrowInvalid
If `step` is zero.
Returns
-------
arange : Array
"""
cdef CMemoryPool* pool = maybe_unbox_memory_pool(memory_pool)
with nogil:
c_array = GetResultValue(Arange(start, stop, step, pool))
return pyarrow_wrap_array(c_array)
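A usage sketch of the `arange` factory defined above (assuming it is exported at the top level as `pa.arange`):
```
import pyarrow as pa

pa.arange(0, 10, 2)   # Int64Array [0, 2, 4, 6, 8]; `stop` is exclusive
pa.arange(3, 7)       # default step of 1 -> [3, 4, 5, 6]
pa.arange(0, 5, 0)    # raises ArrowInvalid (step must not be zero)
```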
def _normalize_slice(object arrow_obj, slice key):
"""
Slices with step not equal to 1 (or None) will produce a copy
rather than a zero-copy view
"""
cdef:
int64_t start, stop, step
Py_ssize_t n = len(arrow_obj)
start, stop, step = key.indices(n)
if step != 1:
return arrow_obj.take(arange(start, stop, step))
else:
length = max(stop - start, 0)
return arrow_obj.slice(start, length)
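In user code this is the difference between plain and stepped slicing of an Array (per the docstring above, only step-1 slices stay zero-copy):
```
import pyarrow as pa

arr = pa.array([0, 1, 2, 3, 4, 5])
arr[1:4]    # step 1 -> zero-copy slice: [1, 2, 3]
arr[::2]    # step 2 -> copy materialized via take: [0, 2, 4]
```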
cdef Py_ssize_t _normalize_index(Py_ssize_t index,
Py_ssize_t length) except -1:
if index < 0:
index += length
if index < 0:
raise IndexError("index out of bounds")
elif index >= length:
raise IndexError("index out of bounds")
return index
ARROW-1559: [C++] Add Unique kernel and refactor DictionaryBuilder to be a stateful kernel Only intended to implement selective categorical conversion in `to_pandas()` but it seems that there is a lot missing to do this in a clean fashion. Author: Wes McKinney <wes.mckinney@twosigma.com> Closes #1266 from xhochy/ARROW-1559 and squashes the following commits: 50249652 [Wes McKinney] Fix MSVC linker issue b6cb1ece [Wes McKinney] Export CastOptions 4ea3ce61 [Wes McKinney] Return NONE Datum in else branch of functions 4f969c6b [Wes McKinney] Move deprecation suppression after flag munging 7f557cc0 [Wes McKinney] Code review comments, disable C4996 warning (equivalent to -Wno-deprecated) in MSVC builds 84717461 [Wes McKinney] Do not compute hash table threshold on each iteration ae8f2339 [Wes McKinney] Fix double to int64_t conversion warning c1444a26 [Wes McKinney] Fix doxygen warnings 2de85961 [Wes McKinney] Add test cases for unique, dictionary_encode 383b46fd [Wes McKinney] Add Array methods for Unique, DictionaryEncode 0962f06b [Wes McKinney] Add cast method for Column, chunked_array and column factory functions 62c3cefd [Wes McKinney] Datum stubs 27151c47 [Wes McKinney] Implement Cast for chunked arrays, fix kernel implementation. Change kernel API to write to a single Datum 1bf2e2f4 [Wes McKinney] Fix bug with column using wrong type eaadc3e5 [Wes McKinney] Use macros to reduce code duplication in DoubleTableSize 6b4f8f3c [Wes McKinney] Fix datetime64->date32 casting error raised by refactor 2c77a19e [Wes McKinney] Some Decimal->Decimal128 renaming. Add DecimalType base class c07f91b3 [Wes McKinney] ARROW-1559: Add unique kernel
2017-11-17 18:29:49 -05:00
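The Array-level entry points added there remain the usual way to reach these kernels from Python; a quick sketch:
```
import pyarrow as pa

arr = pa.array(["a", "b", "a", "b"])
arr.unique()              # StringArray ["a", "b"]
arr.dictionary_encode()   # DictionaryArray with indices [0, 1, 0, 1]

pa.array([1, 2, 3]).cast("float64")   # cast, here using a type alias string
```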
cdef wrap_datum(const CDatum& datum):
if datum.kind() == DatumType_ARRAY:
return pyarrow_wrap_array(MakeArray(datum.array()))
elif datum.kind() == DatumType_CHUNKED_ARRAY:
return pyarrow_wrap_chunked_array(datum.chunked_array())
elif datum.kind() == DatumType_RECORD_BATCH:
return pyarrow_wrap_batch(datum.record_batch())
elif datum.kind() == DatumType_TABLE:
return pyarrow_wrap_table(datum.table())
elif datum.kind() == DatumType_SCALAR:
return pyarrow_wrap_scalar(datum.scalar())
else:
raise ValueError("Unable to wrap Datum in a Python object")
cdef _append_array_buffers(const CArrayData* ad, list res):
"""
Recursively append Buffer wrappers from *ad* and its children.
"""
cdef size_t i, n
assert ad != NULL
n = ad.buffers.size()
for i in range(n):
buf = ad.buffers[i]
res.append(pyarrow_wrap_buffer(buf)
if buf.get() != NULL else None)
n = ad.child_data.size()
for i in range(n):
_append_array_buffers(ad.child_data[i].get(), res)
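The public counterpart of this helper is `Array.buffers()`, which flattens the same buffer structure; a quick sketch:
```
import pyarrow as pa

arr = pa.array([1, None, 3])
bufs = arr.buffers()
# For a primitive array: [validity bitmap buffer, data buffer].
# An entry is None when the corresponding buffer is absent
# (e.g. no validity bitmap when there are no nulls).
len(bufs)   # 2
```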
cdef _reduce_array_data(const CArrayData* ad):
"""
Recursively dissect ArrayData into (picklable) tuples.
"""
cdef size_t i, n
assert ad != NULL
n = ad.buffers.size()
buffers = []
for i in range(n):
buf = ad.buffers[i]
buffers.append(pyarrow_wrap_buffer(buf)
if buf.get() != NULL else None)
children = []
n = ad.child_data.size()
for i in range(n):
children.append(_reduce_array_data(ad.child_data[i].get()))
if ad.dictionary.get() != NULL:
dictionary = _reduce_array_data(ad.dictionary.get())
else:
dictionary = None
return pyarrow_wrap_data_type(ad.type), ad.length, ad.null_count, \
ad.offset, buffers, children, dictionary
cdef shared_ptr[CArrayData] _reconstruct_array_data(data):
"""
Reconstruct CArrayData objects from the tuple structure generated
by _reduce_array_data.
"""
cdef:
int64_t length, null_count, offset, i
DataType dtype
Buffer buf
vector[shared_ptr[CBuffer]] c_buffers
vector[shared_ptr[CArrayData]] c_children
shared_ptr[CArrayData] c_dictionary
dtype, length, null_count, offset, buffers, children, dictionary = data
for i in range(len(buffers)):
buf = buffers[i]
if buf is None:
c_buffers.push_back(shared_ptr[CBuffer]())
else:
c_buffers.push_back(buf.buffer)
for i in range(len(children)):
c_children.push_back(_reconstruct_array_data(children[i]))
if dictionary is not None:
c_dictionary = _reconstruct_array_data(dictionary)
return CArrayData.MakeWithChildrenAndDictionary(
dtype.sp_type,
length,
c_buffers,
c_children,
c_dictionary,
null_count,
offset)
def _restore_array(data):
"""
Reconstruct an Array from pickled ArrayData.
"""
cdef shared_ptr[CArrayData] ad = _reconstruct_array_data(data)
return pyarrow_wrap_array(MakeArray(ad))
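These helpers are what back pickling of arrays; a round trip is a convenient sanity check (sketch):
```
import pickle
import pyarrow as pa

arr = pa.array([1, None, 3])
restored = pickle.loads(pickle.dumps(arr))
assert restored.equals(arr)
```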
cdef class ArrayStatistics(_Weakrefable):
"""
The class for statistics of an array.
"""
def __init__(self):
raise TypeError(f"Do not call {self.__class__.__name__}'s constructor "
"directly")
cdef void init(self, const shared_ptr[CArrayStatistics]& sp_statistics):
self.sp_statistics = sp_statistics
def __repr__(self):
return (f"arrow.ArrayStatistics<null_count={self.null_count}, "
f"distinct_count={self.distinct_count}, min={self.min}, "
f"is_min_exact={self.is_min_exact}, max={self.max}, "
f"is_max_exact={self.is_max_exact}>")
@property
def null_count(self):
"""
The number of nulls.
"""
null_count = self.sp_statistics.get().null_count
# We'll be able to simplify this after
# https://github.com/cython/cython/issues/6692 is solved.
if not null_count.has_value():
return None
value = null_count.value()
if holds_alternative[int64_t](value):
return get[int64_t](value)
else:
return get[double](value)
@property
def is_null_count_exact(self):
"""
Whether the number of null values is a valid exact value or not.
"""
null_count = self.sp_statistics.get().null_count
if not null_count.has_value():
return False
value = null_count.value()
return holds_alternative[int64_t](value)
@property
def distinct_count(self):
"""
The number of distinct values.
"""
distinct_count = self.sp_statistics.get().distinct_count
if not distinct_count.has_value():
return None
value = distinct_count.value()
if holds_alternative[int64_t](value):
return get[int64_t](value)
else:
return get[double](value)
@property
def is_distinct_count_exact(self):
"""
Whether the number of distinct values is a valid exact value or not.
"""
distinct_count = self.sp_statistics.get().distinct_count
if not distinct_count.has_value():
return False
value = distinct_count.value()
return holds_alternative[int64_t](value)
@property
def min(self):
"""
The minimum value.
"""
return self._get_value(self.sp_statistics.get().min)
@property
def is_min_exact(self):
"""
Whether the minimum value is an exact value or not.
"""
return self.sp_statistics.get().is_min_exact
@property
def max(self):
"""
The maximum value.
"""
return self._get_value(self.sp_statistics.get().max)
@property
def is_max_exact(self):
"""
Whether the maximum value is an exact value or not.
"""
return self.sp_statistics.get().is_max_exact
cdef _get_value(self, const optional[CArrayStatisticsValueType]& optional_value):
"""
Get a raw value from
std::optional<arrow::ArrayStatistics::ValueType> data.
arrow::ArrayStatistics::ValueType is
std::variant<bool, int64_t, uint64_t, double, std::string>.
"""
if not optional_value.has_value():
return None
value = optional_value.value()
if holds_alternative[c_bool](value):
return get[c_bool](value)
elif holds_alternative[int64_t](value):
return get[int64_t](value)
elif holds_alternative[uint64_t](value):
return get[uint64_t](value)
elif holds_alternative[double](value):
return get[double](value)
else:
return get[c_string](value)
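ArrayStatistics objects are not constructed directly (the constructor above raises). The sketch below assumes statistics are surfaced through a `statistics` attribute on arrays that carry them, for example data read back from a format that records statistics; both the attribute name and the availability of statistics are assumptions here, not confirmed API:
```
import pyarrow as pa

arr = pa.array([1, 2, None, 4])
# Hypothetical access path; may be None when no statistics are attached.
stats = getattr(arr, "statistics", None)
if stats is not None:
    print(stats.null_count, stats.is_null_count_exact)
    print(stats.min, stats.is_min_exact, stats.max, stats.is_max_exact)
```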
cdef class _PandasConvertible(_Weakrefable):
ARROW-3928: [Python] Deduplicate Python objects when converting binary, string, date, time types to object arrays This adds a `deduplicate_objects` option to all of the `to_pandas` methods. It works with string types, date types (when `date_as_object=True`), and time types. I also made it so that `ScalarMemoTable` can be used with `string_view`, for more efficient memoization in this case. I made the default for `deduplicate_objects` True. When the ratio of unique strings to the length of the array is low, not only does this use drastically less memory, it is also faster. I will write some benchmarks to show where the "crossover point" is when the overhead of hashing makes things slower. Let's consider a simple case where we have 10,000,000 strings of length 10, but only 1000 unique values: ``` In [50]: import pandas.util.testing as tm In [51]: unique_values = [tm.rands(10) for i in range(1000)] In [52]: values = unique_values * 10000 In [53]: arr = pa.array(values) In [54]: timeit arr.to_pandas() 236 ms ± 1.69 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [55]: timeit arr.to_pandas(deduplicate_objects=False) 730 ms ± 12.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) ``` Almost 3 times faster in this case. The difference in memory use is even more drastic ``` In [44]: unique_values = [tm.rands(10) for i in range(1000)] In [45]: values = unique_values * 10000 In [46]: arr = pa.array(values) In [49]: %memit result11 = arr.to_pandas() peak memory: 1505.89 MiB, increment: 76.27 MiB In [50]: %memit result12 = arr.to_pandas(deduplicate_objects=False) peak memory: 2202.29 MiB, increment: 696.11 MiB ``` As you can see, this is a huge problem. If our bug reports about Parquet memory use problems are any indication, users have been suffering from this issue for a long time. When the strings are mostly unique, things are slower as expected, and the peak memory use is higher because of the hash table ``` In [17]: unique_values = [tm.rands(10) for i in range(500000)] In [18]: values = unique_values * 2 In [19]: arr = pa.array(values) In [20]: timeit result = arr.to_pandas() 177 ms ± 574 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [21]: timeit result = arr.to_pandas(deduplicate_objects=False) 70.1 ms ± 783 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) In [42]: %memit result8 = arr.to_pandas() peak memory: 644.39 MiB, increment: 92.23 MiB In [43]: %memit result9 = arr.to_pandas(deduplicate_objects=False) peak memory: 610.85 MiB, increment: 58.41 MiB ``` In real-world work, data with many duplicated strings is the most common use case. Given the massive reduction in memory use and moderate performance improvements, it makes sense to have this enabled by default. Author: Wes McKinney <wesm+git@apache.org> Closes #3257 from wesm/ARROW-3928 and squashes the following commits: d9a88700 <Wes McKinney> Prettier output a00b51c7 <Wes McKinney> Add benchmarks for object deduplication ca88b963 <Wes McKinney> Add Python unit tests, deduplicate for date and time types also when converting to Python objects 7a7873b8 <Wes McKinney> First working iteration of string deduplication when calling to_pandas
2018-12-27 12:17:50 -06:00
def to_pandas(
self,
memory_pool=None,
categories=None,
bint strings_to_categorical=False,
bint zero_copy_only=False,
bint integer_object_nulls=False,
bint date_as_object=True,
bint timestamp_as_object=False,
bint use_threads=True,
bint deduplicate_objects=True,
ARROW-3789: [Python] Use common conversion path for Arrow to pandas.Series/DataFrame. Zero copy optimizations for DataFrame, add split_blocks and self_destruct options The primary goal of this patch is to provide a way for some users to avoid memory doubling with converting from Arrow to pandas. This took me entirely too much time to get right, but partly I was attempting to disentangle some of the technical debt and overdue refactoring in arrow_to_pandas.cc. Summary of what's here: - Refactor ChunkedArray->Series and Table->DataFrame conversion paths to use the exact same code rather than two implementations of the same thing with slightly different behavior. The `ArrowDeserializer` helper class is now gone - Do zero-copy construction of internal DataFrame blocks for the case of a contiguous non-nullable array and a block with only 1 column represented - Add `split_blocks` option to `to_pandas` which constructs one block per DataFrame column, resulting in more zero-copy opportunities. Note that pandas's internal "consolidation" can still cause memory doubling (see discussion about this in https://github.com/pandas-dev/pandas/issues/10556) - Add `self_destruct` option to `to_pandas` which releases the Table's internal buffers as soon as they are converted to the required pandas structure. This allows memory to be reclaimed by the OS as conversion is taking place rather than having a forced memory-doubling and then post-facto reclamation (which has been causing OOM for some users) The most conservative invocation of `to_pandas` now would be `table.to_pandas(use_threads=False, split_blocks=True, self_destruct=True)` Note that the self-destruct option makes the `Table` object unsafe for further use. This is a bit dissatisfying but I wasn't sure how else to provide this capability. Closes #6067 from wesm/ARROW-3789 and squashes the following commits: 3b4260283 <Wes McKinney> Code review comments 8f39cce05 <Wes McKinney> Add some documentation. Try fixing MSVC warnings c22d280dc <Wes McKinney> Fix one MSVC cast warning 43068032c <Wes McKinney> Add "split blocks" and "self destruct" options to Table.to_pandas, with zero-copy operations for improved memory use when converting from Arrow to pandas Authored-by: Wes McKinney <wesm+git@apache.org> Signed-off-by: Wes McKinney <wesm+git@apache.org>
2020-01-14 18:25:01 -06:00
bint ignore_metadata=False,
bint safe=True,
bint split_blocks=False,
ARROW-7569: [Python] Add API to map Arrow types to pandas ExtensionDtypes in to_pandas conversions See https://issues.apache.org/jira/browse/ARROW-7569 and https://issues.apache.org/jira/browse/ARROW-2428 for context. https://github.com/apache/arrow/pull/5512 only covered the first 2 cases described in ARROW-2428, this also tries to cover the third case. This PR adds a `types_mapping` to `Table.to_pandas` to specify pandas ExtensionDtypes for built-in arrow types to use in the conversion. One specific example use case for this ability is to convert arrow integer types to pandas' nullable integer dtype instead of to numpy integer dtype (or for one of the other custom nullable dtypes in pandas). For example: ``` table.to_pandas(types_mapping={pa.int64(): pd.Int64Dtype()}) ``` will avoid to convert the int columns first to numpy dtype (possibly float) by directly constructing the pandas nullable dtype. Need to add more tests, and one important concern is that using a pyarrow type instance as the dict key might not easily work for parametrized types (eg timestamp with resolution / timezone). Closes #6189 from jorisvandenbossche/ARROW-7569-to-pandas-types-mapping and squashes the following commits: cb82f5c21 <Joris Van den Bossche> expand tests 1d9c37ca1 <Joris Van den Bossche> simplify (remove unused extension_columns arg) b61b1f5ac <Joris Van den Bossche> dict -> function f3464b15a <Joris Van den Bossche> ARROW-7569: Add API to map Arrow types to pandas ExtensionDtypes for to_pandas conversions Authored-by: Joris Van den Bossche <jorisvandenbossche@gmail.com> Signed-off-by: Neal Richardson <neal.p.richardson@gmail.com>
2020-01-23 09:42:42 -08:00
bint self_destruct=False,
str maps_as_pydicts=None,
types_mapper=None,
bint coerce_temporal_nanoseconds=False
):
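Picking up the ARROW-7569 note above: the parameter ultimately landed as `types_mapper`, a function mapping a pyarrow type to a pandas ExtensionDtype (or None to fall back to the default). The dict-based example in that message translates to passing a dict's `.get` (sketch, assuming pandas is installed):
```
import pandas as pd
import pyarrow as pa

table = pa.table({"a": [1, None, 3]})
df = table.to_pandas(types_mapper={pa.int64(): pd.Int64Dtype()}.get)
df["a"].dtype   # Int64 (pandas nullable integer) instead of float64
```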
"""
Convert to a pandas-compatible NumPy array or DataFrame, as appropriate
Parameters
----------
memory_pool : MemoryPool, default None
Arrow MemoryPool to use for allocations. Uses the default memory
pool if not passed.
categories : list, default empty
List of fields that should be returned as pandas.Categorical. Only
applies to table-like data structures.
strings_to_categorical : bool, default False
Encode string (UTF8) and binary types to pandas.Categorical.
zero_copy_only : bool, default False
Raise an ArrowException if this function call would require copying
the underlying data.
integer_object_nulls : bool, default False
Cast integers with nulls to objects
date_as_object : bool, default True
Cast dates to objects. If False, convert to datetime64 dtype with
the equivalent time unit (if supported). Note: in pandas version
< 2.0, only datetime64[ns] conversion is supported.
timestamp_as_object : bool, default False
Cast non-nanosecond timestamps (np.datetime64) to objects. This is
useful in pandas version 1.x if you have timestamps that don't fit
in the normal date range of nanosecond timestamps (1678 CE-2262 CE).
Non-nanosecond timestamps are supported in pandas version 2.0.
If False, all timestamps are converted to datetime64 dtype.
use_threads : bool, default True
Whether to parallelize the conversion using multiple threads.
deduplicate_objects : bool, default True
Do not create multiple copies of identical Python objects when
converting, to save on memory use. Conversion may be slower when
values are mostly unique.
ignore_metadata : bool, default False
If True, do not use the 'pandas' metadata to reconstruct the
DataFrame index, if present.
safe : bool, default True
For certain data types, a cast is needed in order to store the
data in a pandas DataFrame or Series (e.g. timestamps are always
stored as nanoseconds in pandas). This option controls whether it
is a safe cast or not.
split_blocks : bool, default False
If True, generate one internal "block" for each column when
creating a pandas.DataFrame from a RecordBatch or Table. While this
can temporarily reduce memory, note that various pandas operations
can trigger "consolidation" which may balloon memory use.
self_destruct : bool, default False
EXPERIMENTAL: If True, attempt to deallocate the originating Arrow
memory while converting the Arrow object to pandas. If you use the
object after calling to_pandas with this option, it will crash your
program.
Note that you may not always see memory usage improvements. For
example, if multiple columns share an underlying allocation,
memory can't be freed until all columns are converted.
maps_as_pydicts : str, optional, default `None`
Valid values are `None`, 'lossy', or 'strict'.
The default behavior (`None`) is to convert Arrow Map arrays to
Python association lists (list-of-tuples) in the same order as the
Arrow Map, as in [(key1, value1), (key2, value2), ...].
If 'lossy' or 'strict', convert Arrow Map arrays to native Python dicts.
This can change the ordering of (key, value) pairs, and will
deduplicate multiple keys, resulting in a possible loss of data.
If 'lossy', this key deduplication results in a warning printed
when detected. If 'strict', this instead results in an exception
being raised when detected.
types_mapper : function, default None
A function mapping a pyarrow DataType to a pandas ExtensionDtype.
This can be used to override the default pandas type for conversion
of built-in pyarrow types or in absence of pandas_metadata in the
Table schema. The function receives a pyarrow DataType and is
expected to return a pandas ExtensionDtype or ``None`` if the
default conversion should be used for that type. If you have
a dictionary mapping, you can pass ``dict.get`` as function.
coerce_temporal_nanoseconds : bool, default False
Only applicable to pandas version >= 2.0.
A legacy option to coerce date32, date64, duration, and timestamp
time units to nanoseconds when converting to pandas. This is the
default behavior in pandas version 1.x. Set this option to True if
you'd like to use this coercion when using pandas version >= 2.0
for backwards compatibility (not recommended otherwise).
Returns
-------
pandas.Series or pandas.DataFrame depending on type of object
Examples
--------
>>> import pyarrow as pa
>>> import pandas as pd
Convert a Table to pandas DataFrame:
>>> table = pa.table([
... pa.array([2, 4, 5, 100]),
... pa.array(["Flamingo", "Horse", "Brittle stars", "Centipede"])
... ], names=['n_legs', 'animals'])
>>> table.to_pandas()
n_legs animals
0 2 Flamingo
1 4 Horse
2 5 Brittle stars
3 100 Centipede
>>> isinstance(table.to_pandas(), pd.DataFrame)
True
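Map pyarrow types to pandas extension dtypes with ``types_mapper``; this is
a minimal sketch in which ``dtype_mapping`` is just an illustrative name and
the resulting dtypes depend on the installed pandas version, so the output
is skipped:
>>> dtype_mapping = {pa.int64(): pd.Int64Dtype()}
>>> table.to_pandas(types_mapper=dtype_mapping.get)  # doctest: +SKIP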
Convert a RecordBatch to pandas DataFrame:
>>> import pyarrow as pa
>>> n_legs = pa.array([2, 4, 5, 100])
>>> animals = pa.array(["Flamingo", "Horse", "Brittle stars", "Centipede"])
>>> batch = pa.record_batch([n_legs, animals],
... names=["n_legs", "animals"])
>>> batch
pyarrow.RecordBatch
n_legs: int64
animals: string
----
n_legs: [2,4,5,100]
animals: ["Flamingo","Horse","Brittle stars","Centipede"]
>>> batch.to_pandas()
n_legs animals
0 2 Flamingo
1 4 Horse
2 5 Brittle stars
3 100 Centipede
>>> isinstance(batch.to_pandas(), pd.DataFrame)
True
Convert a ChunkedArray to pandas Series:
>>> import pyarrow as pa
>>> n_legs = pa.chunked_array([[2, 2, 4], [4, 5, 100]])
>>> n_legs.to_pandas()
0 2
1 2
2 4
3 4
4 5
5 100
dtype: int64
>>> isinstance(n_legs.to_pandas(), pd.Series)
True
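Trade memory for speed with ``deduplicate_objects``, or minimize peak memory
use with ``split_blocks`` and ``self_destruct``. This is an illustrative
sketch only; ``self_destruct`` is experimental and the Table must not be used
afterwards, so outputs are skipped:
>>> table.to_pandas(deduplicate_objects=False)  # doctest: +SKIP
>>> df = table.to_pandas(split_blocks=True, self_destruct=True)  # doctest: +SKIP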
"""
options = dict(
pool=memory_pool,
strings_to_categorical=strings_to_categorical,
zero_copy_only=zero_copy_only,
integer_object_nulls=integer_object_nulls,
date_as_object=date_as_object,
timestamp_as_object=timestamp_as_object,
use_threads=use_threads,
deduplicate_objects=deduplicate_objects,
safe=safe,
split_blocks=split_blocks,
self_destruct=self_destruct,
maps_as_pydicts=maps_as_pydicts,
coerce_temporal_nanoseconds=coerce_temporal_nanoseconds
)
return self._to_pandas(options, categories=categories,
ignore_metadata=ignore_metadata,
types_mapper=types_mapper)
cdef PandasOptions _convert_pandas_options(dict options):
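# Translate the Python-level options dict assembled in to_pandas into the
# C++ PandasOptions struct consumed by the Arrow-to-pandas conversion code.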
cdef PandasOptions result
result.pool = maybe_unbox_memory_pool(options['pool'])
result.strings_to_categorical = options['strings_to_categorical']
result.zero_copy_only = options['zero_copy_only']
result.integer_object_nulls = options['integer_object_nulls']
result.date_as_object = options['date_as_object']
result.timestamp_as_object = options['timestamp_as_object']
result.use_threads = options['use_threads']
result.deduplicate_objects = options['deduplicate_objects']
result.safe_cast = options['safe']
result.split_blocks = options['split_blocks']
result.self_destruct = options['self_destruct']
result.coerce_temporal_nanoseconds = options['coerce_temporal_nanoseconds']
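# Any non-empty value of PYARROW_IGNORE_TIMEZONE (even "0") evaluates as true here.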
result.ignore_timezone = os.environ.get('PYARROW_IGNORE_TIMEZONE', False)
maps_as_pydicts = options['maps_as_pydicts']
if maps_as_pydicts is None:
result.maps_as_pydicts = MapConversionType.DEFAULT
elif maps_as_pydicts == "lossy":
result.maps_as_pydicts = MapConversionType.LOSSY
elif maps_as_pydicts == "strict":
result.maps_as_pydicts = MapConversionType.STRICT_
else:
raise ValueError(
"Invalid value for 'maps_as_pydicts': "
+ "valid values are 'lossy', 'strict' or `None` (default). "
+ f"Received '{maps_as_pydicts}'."
)
return result
cdef class Array(_PandasConvertible):
"""
The base class for all Arrow arrays.
"""
def __init__(self):
raise TypeError(f"Do not call {self.__class__.__name__}'s constructor "
"directly, use one of the `pyarrow.Array.from_*` "
"functions instead.")
cdef void init(self, const shared_ptr[CArray]& sp_array) except *:
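# Store the shared_ptr for ownership, cache the raw pointer for fast access,
# and wrap the array's Arrow type as a Python DataType.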
self.sp_array = sp_array
self.ap = sp_array.get()
self.type = pyarrow_wrap_data_type(self.sp_array.get().type())
def _debug_print(self):
with nogil:
check_status(DebugPrint(deref(self.ap), 0))
def diff(self, Array other):
"""
Compare contents of this array against another one.
Return a string containing the result of diffing this array
(on the left side) against the other array (on the right side).
Parameters
----------
other : Array
The other array to compare this array with.
Returns
-------
diff : str
A human-readable printout of the differences.
Examples
--------
>>> import pyarrow as pa
>>> left = pa.array(["one", "two", "three"])
>>> right = pa.array(["two", None, "two-and-a-half", "three"])
>>> print(left.diff(right)) # doctest: +SKIP
@@ -0, +0 @@
-"one"
@@ -2, +1 @@
+null
+"two-and-a-half"
"""
self._assert_cpu()
cdef c_string result
with nogil:
result = self.ap.Diff(deref(other.ap))
return frombytes(result, safe=True)
def cast(self, object target_type=None, safe=None, options=None, memory_pool=None):
ARROW-1156: [C++/Python] Expand casting API, add UnaryKernel callable. Use Cast in appropriate places when converting from pandas cc @cloud-fan With this patch we now try to cast to indicated type on ingest of objects from pandas: ``` In [3]: arr = np.array([None] * 5) In [4]: pa.Array.from_pandas(arr) Out[4]: <pyarrow.lib.NullArray object at 0x7f6cf1485d18> [ NA, NA, NA, NA, NA ] In [5]: pa.Array.from_pandas(arr, type=pa.int32()) Out[5]: <pyarrow.lib.Int32Array object at 0x7f6cf1485d68> [ NA, NA, NA, NA, NA ] ``` I also added zero-copy casts from integers of the right size to each of the date and time types. Includes refactoring for ARROW-1481. Author: Wes McKinney <wes.mckinney@twosigma.com> Closes #1063 from wesm/ARROW-1156 and squashes the following commits: 166d1a50 [Wes McKinney] iwyu 34f5c9d1 [Wes McKinney] Harden default cast options, fix unsafe Python case 1d07b756 [Wes McKinney] Add some basic casting unit tests in Python c1b45709 [Wes McKinney] Expose arrow::compute::Cast in Python as Array.cast. Still need to write tests a9a04c9c [Wes McKinney] UnaryKernel::Call returns Status for now for simplicity. Support pre-allocated memory 8903709b [Wes McKinney] Implement casts from null to numbers. Try to cast for types where we do not have an inference rule when converting from arrays of Python objects a22dd20a [Wes McKinney] Add test to assert zero copy for compatible integer to date/time a14b83f7 [Wes McKinney] Create callable CastKernel object. Add zero-copy cast rules for date/time types
2017-09-08 10:09:38 -04:00
"""
Cast array values to another data type.
See :func:`pyarrow.compute.cast` for usage.
Parameters
----------
target_type : DataType, default None
Type to cast array to.
safe : bool, default True
Whether to check for conversion errors such as overflow.
options : CastOptions, default None
Additional checks passed via CastOptions.
memory_pool : MemoryPool, optional
Memory pool to use for allocations during function execution.
Returns
-------
cast : Array
"""
self._assert_cpu()
return _pc().cast(self, target_type, safe=safe,
options=options, memory_pool=memory_pool)
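As a quick illustrative sketch (not part of this file), casting an integer array to floating point preserves nulls, and a lossy cast is rejected unless `safe=False` is passed:

```python
import pyarrow as pa

arr = pa.array([1, 2, None])
arr.cast(pa.float64())                 # values become 1.0, 2.0, null

# A truncating cast raises by default; safe=False permits it.
pa.array([1.5, 2.7]).cast(pa.int64(), safe=False)
```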
def view(self, object target_type):
"""
Return zero-copy "view" of array as another data type.
The data types must have compatible columnar buffer layouts.
Parameters
----------
target_type : DataType
Type to construct view as.
Returns
-------
view : Array
"""
self._assert_cpu()
cdef DataType type = ensure_type(target_type)
cdef shared_ptr[CArray] result
with nogil:
result = GetResultValue(self.ap.View(type.sp_type))
return pyarrow_wrap_array(result)
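A minimal sketch of a zero-copy view between layout-compatible types; int32 and date32 share the same 32-bit physical layout, so the buffers are simply reinterpreted (illustrative values only):

```python
import pyarrow as pa

days = pa.array([0, 1, 18000], type=pa.int32())
dates = days.view(pa.date32())   # same buffers, now read as days since the UNIX epoch
```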
def sum(self, **kwargs):
"""
Sum the values in a numerical array.
See :func:`pyarrow.compute.sum` for full usage.
Parameters
----------
**kwargs : dict, optional
Options to pass to :func:`pyarrow.compute.sum`.
Returns
-------
sum : Scalar
A scalar containing the sum value.
"""
self._assert_cpu()
options = _pc().ScalarAggregateOptions(**kwargs)
return _pc().call_function('sum', [self], options)
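For illustration, a hedged usage sketch of `sum`; the keyword arguments are forwarded to `ScalarAggregateOptions`:

```python
import pyarrow as pa

arr = pa.array([1, 2, None])
arr.sum()                  # Int64Scalar of 3; nulls are skipped by default
arr.sum(skip_nulls=False)  # null result, because a null value is present
```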
ARROW-1559: [C++] Add Unique kernel and refactor DictionaryBuilder to be a stateful kernel Only intended to implement selective categorical conversion in `to_pandas()` but it seems that there is a lot missing to do this in a clean fashion. Author: Wes McKinney <wes.mckinney@twosigma.com> Closes #1266 from xhochy/ARROW-1559 and squashes the following commits: 50249652 [Wes McKinney] Fix MSVC linker issue b6cb1ece [Wes McKinney] Export CastOptions 4ea3ce61 [Wes McKinney] Return NONE Datum in else branch of functions 4f969c6b [Wes McKinney] Move deprecation suppression after flag munging 7f557cc0 [Wes McKinney] Code review comments, disable C4996 warning (equivalent to -Wno-deprecated) in MSVC builds 84717461 [Wes McKinney] Do not compute hash table threshold on each iteration ae8f2339 [Wes McKinney] Fix double to int64_t conversion warning c1444a26 [Wes McKinney] Fix doxygen warnings 2de85961 [Wes McKinney] Add test cases for unique, dictionary_encode 383b46fd [Wes McKinney] Add Array methods for Unique, DictionaryEncode 0962f06b [Wes McKinney] Add cast method for Column, chunked_array and column factory functions 62c3cefd [Wes McKinney] Datum stubs 27151c47 [Wes McKinney] Implement Cast for chunked arrays, fix kernel implementation. Change kernel API to write to a single Datum 1bf2e2f4 [Wes McKinney] Fix bug with column using wrong type eaadc3e5 [Wes McKinney] Use macros to reduce code duplication in DoubleTableSize 6b4f8f3c [Wes McKinney] Fix datetime64->date32 casting error raised by refactor 2c77a19e [Wes McKinney] Some Decimal->Decimal128 renaming. Add DecimalType base class c07f91b3 [Wes McKinney] ARROW-1559: Add unique kernel
2017-11-17 18:29:49 -05:00
def unique(self):
"""
Compute distinct elements in array.
Returns
-------
unique : Array
An array of the same data type, with deduplicated elements.
"""
self._assert_cpu()
ARROW-8792: [C++][Python][R][GLib] New Array compute kernels implementation and execution framework This patch is a major reworking of our development strategy for implementing array-valued functions and applying them in a query processing setting. The design was partly inspired by my previous work designing Ibis (https://github.com/ibis-project/ibis -- the "expr" subsystem and the way that operators validate input types and resolve output types). Using only function names and input types, you can determine the output types of each function and resolve the "execute" function that performs a unit of work processing a batch of data. This will allow us to build deferred column expressions and then (eventually) do parallel execution. There are a ton of details, but one nice thing is that there is now a single API entry point for invoking any function by its name: ```c++ Result<Datum> CallFunction(ExecContext* ctx, const std::string& func_name, const std::vector<Datum>& args, const FunctionOptions* options = NULLPTR); ``` What occurs when you do this: * A `Function` instance is looked up in the global `FunctionRegistry` * Given the descriptors of `args` (their types and shapes -- array or scalar), the Function searches for `Kernel` that is able to process those types and shapes. A kernel might be able to do `array[T0], array[T1]` or only `scalar[T0], scalar[T1]`, for example. This permits kernel specialization to treat different type and shape combinations * The kernel is executed iteratively against `args` based on what `args` contains -- if there are ChunkedArrays, they will be split into contiguous pieces. Kernels never see ChunkedArray, only Array or Scalar * The Executor implementation is able to split contiguous Array inputs into smaller chunks, which is important for parallel execution. See `ExecContext::set_exec_chunksize` To summarize: the REGISTRY contains FUNCTIONS. A FUNCTION contains KERNELS. A KERNEL is a specific implementation of a function that services a particular type combination. An additional effort in this patch is to radically simplify the process of creating kernels that are based on a scalar function. To do this, there is a growing collection of template-based kernel generation classes in compute/kernels/codegen_internal.h that will surely be the topic of much debate. I want to make it a lot easier for people to add new kernels. There are some other incidental changes in the PR, such as changing the convenience APIs like `Cast` to return `Result`. I'm afraid we may have to live with the API breakage unless someone else wants to add backward compatibility code for the old APIs. I have to apologize for making such a large PR. I've been working long hours on this for nearly a month and the process of porting all of our existing functionality and making the unit tests pass caused much iteration in the "framework" part of the code, such that it would have been a huge time drain to review incomplete iterations of the framework that had not been proven to capture the functionality that previously existed in the project. Given the size of this PR and that fact that it completely blocks any work into src/arrow/compute, I don't think we should let this sit unmerged for more than 4 or 5 days, tops. I'm committed to responding to all of your questions and working to address your feedback about the design and improving the documentation and code comments. I tried to leave copious comments to explain my thought process in various places. 
Feel free to make any and all comments in this PR or in whatever form you like. I don't think that merging should be blocked on stylistic issues. Closes #7240 from wesm/ARROW-8792-kernels-revamp Lead-authored-by: Wes McKinney <wesm+git@apache.org> Co-authored-by: Sutou Kouhei <kou@clear-code.com> Signed-off-by: Wes McKinney <wesm+git@apache.org>
2020-05-24 09:35:00 -05:00
return _pc().call_function('unique', [self])
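An illustrative sketch of `unique` on a small string array (output shown as a comment, not asserted):

```python
import pyarrow as pa

arr = pa.array(["a", "b", "a", None, "b"])
arr.unique()   # distinct values, e.g. ["a", "b", null]
```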
ARROW-10438: [C++][Dataset] Partitioning::Format on nulls Tested and added support for partitioning with nulls. I had to make some changes to the hash kernels. You can now specify how you want DictionaryEncode to treat nulls. The MASK option will continue the current behavior (null not in dictionary, null value in indices) and the ENCODE option will put `null` in the dictionary and there will be no null values in the indices array. Partitioning on nulls will depend on the partitioning scheme. For directory partitioning null is allowed on inner fields but it is not allowed on an outer field if an inner field is defined. In other words, if the schema is a(int32), b(int32), c(int32) then the following are allowed ``` / (a=null, b=null, c=null) /32 (a=32, b=null, c=null) /32/57 (a=32, b=57, c=null) ``` There is no way to specify `a=null, b=57, c=null`. This does mean that partition directories can contain a mix of files and nested partition directories (e.g. /32 might contain file.parquet and the directory /57). Alternatively we could just forbid nulls in the directory partitioning scheme. For the hive scheme we need to be compatible with other tools that read/write hive. Those tools use a fallback value which defaults to `__HIVE_DEFAULT_PARTITION__`. So by default you would have directories that look like... ``` /a=__HIVE_DEFAULT_PARTITION__/b=__HIVE_DEFAULT_PARTITION__/c=__HIVE_DEFAULT_PARTITION__ ``` The null fallback value is configurable as a string passed to HivePartitioning::HivePartitioning or HivePartitioning::MakeFactory. ARROW-11649 has been created for extending this null fallback configuration to R. Closes #9323 from westonpace/feature/arrow-10438 Lead-authored-by: Weston Pace <weston.pace@gmail.com> Co-authored-by: Benjamin Kietzman <bengilgit@gmail.com> Signed-off-by: Benjamin Kietzman <bengilgit@gmail.com>
2021-02-24 10:34:31 -05:00
def dictionary_encode(self, null_encoding='mask'):
"""
Compute dictionary-encoded representation of array.
See :func:`pyarrow.compute.dictionary_encode` for full usage.
Parameters
----------
null_encoding : str, default "mask"
How to handle null entries.
Returns
-------
encoded : DictionaryArray
A dictionary-encoded version of this array.
"""
self._assert_cpu()
options = _pc().DictionaryEncodeOptions(null_encoding)
return _pc().call_function('dictionary_encode', [self], options)
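A hedged sketch of dictionary encoding, contrasting the two null-handling modes described above (`"mask"` keeps nulls in the indices, `"encode"` adds null as a dictionary entry):

```python
import pyarrow as pa

arr = pa.array(["apple", "pear", "apple", None])
enc = arr.dictionary_encode()                         # null stays masked in the indices
enc2 = arr.dictionary_encode(null_encoding="encode")  # null becomes a dictionary entry
enc.dictionary, enc.indices
```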
def value_counts(self):
"""
Compute counts of unique elements in array.
Returns
-------
StructArray
An array of <input type "Values", int64 "Counts"> structs
"""
self._assert_cpu()
return _pc().call_function('value_counts', [self])
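An illustrative sketch of `value_counts`; the result is a StructArray whose child arrays can be pulled out by field name:

```python
import pyarrow as pa

arr = pa.array(["a", "b", "a", "a"])
vc = arr.value_counts()                    # StructArray of <values, counts>
vc.field("values"), vc.field("counts")     # unique values and their occurrence counts
```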
@staticmethod
def from_pandas(obj, mask=None, type=None, bint safe=True,
MemoryPool memory_pool=None):
"""
Convert pandas.Series to an Arrow Array.
This method uses Pandas semantics about what values indicate
nulls. See pyarrow.array for more general conversion from arrays or
sequences to Arrow arrays.
Parameters
----------
obj : ndarray, pandas.Series, array-like
ARROW-838: [Python] Expand pyarrow.array to handle NumPy arrays not originating in pandas This unifies the ingest path for 1D data into `pyarrow.array`. I added the argument `from_pandas` to turn null sentinel checking on or off: ``` In [8]: arr = np.random.randn(10000000) In [9]: arr[::3] = np.nan In [10]: arr2 = pa.array(arr) In [11]: arr2.null_count Out[11]: 0 In [12]: %timeit arr2 = pa.array(arr) The slowest run took 5.43 times longer than the fastest. This could mean that an intermediate result is being cached. 10000 loops, best of 3: 68.4 µs per loop In [13]: arr2 = pa.array(arr, from_pandas=True) In [14]: arr2.null_count Out[14]: 3333334 In [15]: %timeit arr2 = pa.array(arr, from_pandas=True) 1 loop, best of 3: 228 ms per loop ``` When the data is contiguous, it is always zero-copy, but then `from_pandas=True` and no null mask is passed, then a null bitmap is constructed and populated. This also permits sequence reads into integers smaller than int64: ``` In [17]: pa.array([1, 2, 3, 4], type='i1') Out[17]: <pyarrow.lib.Int8Array object at 0x7ffa1c1c65e8> [ 1, 2, 3, 4 ] ``` Oh, I also added NumPy-like string type aliases: ``` In [18]: pa.int32() == 'i4' Out[18]: True ``` Author: Wes McKinney <wes.mckinney@twosigma.com> Closes #1146 from wesm/expand-py-array-method and squashes the following commits: 1570e525 [Wes McKinney] Code review comments d3bbb3c3 [Wes McKinney] Handle type aliases in cast, too 797f0151 [Wes McKinney] Allow null checking to be skipped with from_pandas=False in pyarrow.array f2802fc7 [Wes McKinney] Cleaner codepath for numpy->arrow conversions 587c575a [Wes McKinney] Add direct types sequence converters for more data types cf40b767 [Wes McKinney] Add type aliases, some unit tests 7b530e4b [Wes McKinney] Consolidate both sequence and ndarray/Series/Index conversion in pyarrow.Array
2017-09-29 23:02:58 -05:00
mask : array (boolean), optional
Indicate which values are null (True) or not null (False).
type : pyarrow.DataType
Explicit type to attempt to coerce to, otherwise will be inferred
from the data.
safe : bool, default True
Check for overflows or other unsafe conversions.
memory_pool : pyarrow.MemoryPool, optional
If not passed, will allocate memory from the currently-set default
memory pool.
Notes
-----
Localized timestamps will currently be returned as UTC (pandas's native
representation). Timezone-naive data will be implicitly interpreted as
UTC.
Returns
-------
array : pyarrow.Array or pyarrow.ChunkedArray
ChunkedArray is returned if object data overflows binary buffer.
"""
return array(obj, mask=mask, type=type, safe=safe, from_pandas=True,
memory_pool=memory_pool)
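A short sketch of the pandas-style null semantics mentioned above; a plain NumPy ndarray is accepted too, and NaN is treated as null:

```python
import numpy as np
import pyarrow as pa

s = np.array([1.0, np.nan, 3.0])
arr = pa.Array.from_pandas(s)   # NaN is interpreted as null
arr.null_count                  # 1
```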
def __reduce__(self):
self._assert_cpu()
return _restore_array, \
(_reduce_array_data(self.sp_array.get().data().get()),)
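`__reduce__` is what makes arrays picklable; a minimal round-trip sketch, illustrative only:

```python
import pickle
import pyarrow as pa

arr = pa.array([1, 2, None])
restored = pickle.loads(pickle.dumps(arr))
assert restored.equals(arr)
```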
@staticmethod
def from_buffers(DataType type, length, buffers, null_count=-1, offset=0,
children=None):
"""
Construct an Array from a sequence of buffers.
The concrete type returned depends on the datatype.
Parameters
----------
type : DataType
The value type of the array.
length : int
The number of values in the array.
buffers : List[Buffer | None]
The buffers backing this array.
null_count : int, default -1
The number of null entries in the array. Negative value means that
the null count is not known.
offset : int, default 0
The array's logical offset (in values, not in bytes) from the
start of each buffer.
children : List[Array], default None
Nested type children with length matching type.num_fields.
Returns
-------
array : Array
"""
cdef:
Buffer buf
Array child
vector[shared_ptr[CBuffer]] c_buffers
vector[shared_ptr[CArrayData]] c_child_data
shared_ptr[CArrayData] array_data
children = children or []
if type.num_fields != len(children):
raise ValueError("Type's expected number of children "
f"({type.num_fields}) did not match the passed number "
f"({len(children)})")
if type.has_variadic_buffers:
if type.num_buffers > len(buffers):
raise ValueError("Type's expected number of buffers is at least "
f"{type.num_buffers}, but the passed number is "
f"{len(buffers)}.")
elif type.num_buffers != len(buffers):
raise ValueError("Type's expected number of buffers "
f"({type.num_buffers}) did not match the passed number "
f"({len(buffers)}).")
for buf in buffers:
# None will produce a null buffer pointer
c_buffers.push_back(pyarrow_unwrap_buffer(buf))
for child in children:
c_child_data.push_back(child.ap.data())
array_data = CArrayData.MakeWithChildren(type.sp_type, length,
c_buffers, c_child_data,
null_count, offset)
cdef Array result = pyarrow_wrap_array(MakeArray(array_data))
result.validate()
return result
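A hedged sketch of building a fixed-width array directly from raw buffers. An int32 array uses two buffers, a validity bitmap and a data buffer; passing `None` for the bitmap means the array has no nulls (values here are arbitrary):

```python
import struct
import pyarrow as pa

data = pa.py_buffer(struct.pack("=4i", 10, 20, 30, 40))
arr = pa.Array.from_buffers(pa.int32(), 4, [None, data])
```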
@property
def null_count(self):
self._assert_cpu()
return self.sp_array.get().null_count()
@property
def nbytes(self):
"""
Total number of bytes consumed by the elements of the array.
In other words, the sum of bytes from all buffer
ranges referenced.
Unlike `get_total_buffer_size` this method will account for array
offsets.
If buffers are shared between arrays then the shared
portion will be counted multiple times.
The dictionary of dictionary arrays will always be counted in their
entirety even if the array only references a portion of the dictionary.
"""
self._assert_cpu()
cdef CResult[int64_t] c_size_res
with nogil:
c_size_res = ReferencedBufferSize(deref(self.ap))
size = GetResultValue(c_size_res)
return size
def get_total_buffer_size(self):
"""
The sum of bytes in each buffer referenced by the array.
An array may only reference a portion of a buffer.
This method will overestimate in this case and return the
byte size of the entire buffer.
If a buffer is referenced multiple times then it will
only be counted once.
"""
self._assert_cpu()
cdef int64_t total_buffer_size
total_buffer_size = TotalBufferSize(deref(self.ap))
return total_buffer_size
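To make the distinction concrete, a hedged sketch on a sliced array: `nbytes` accounts for the slice offset, while `get_total_buffer_size` counts the full underlying buffers shared with the parent array:

```python
import pyarrow as pa

arr = pa.array(range(1000))
sliced = arr.slice(10, 5)
sliced.nbytes                   # only the bytes the slice actually references
sliced.get_total_buffer_size()  # the full size of the shared buffers
```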
def __sizeof__(self):
self._assert_cpu()
return super(Array, self).__sizeof__() + self.nbytes
def __iter__(self):
self._assert_cpu()
for i in range(len(self)):
yield self.getitem(i)
def __repr__(self):
type_format = object.__repr__(self)
return f'{type_format}\n{self}'
def to_string(self, *, int indent=2, int top_level_indent=0, int window=10,
int container_window=2, c_bool skip_new_lines=False,
int element_size_limit=100):
"""
Render a "pretty-printed" string representation of the Array.
Note: for data on a non-CPU device, the full array is copied to CPU
memory.
Parameters
----------
indent : int, default 2
How much to indent the internal items in the string to
the right, by default ``2``.
top_level_indent : int, default 0
How much to indent right the entire content of the array,
by default ``0``.
window : int
ARROW-14798: [C++][Python][R] Add container window to PrettyPrintOptions # Summary This PR makes a few changes to PrettyPrinting to make output shorter, particularly for ChunkedArray and ListArray types. * Introduces `container_window` argument to `PrettyPrinterOptions`, which controls the window for ChunkedArray and ListArray separately from other types. * Modified `PrettyPrinter` to pass down `ChildOptions()` to recursive calls. The main effect of this is that `skip_new_lines` is now passed down to children of StructArrays. It also makes sure that `window` and `container` window are passed down to children. * Modified `ChunkedArray` printer to always put new lines between sub-arrays of StructArray. * Added missing comma in `ChunkedArray` print output after ellipsis. * Changed `MapArray` printer to only indent if being printed on multiple lines. These changes affect the C++, Python, and R implementations. ## Example Here's a little test snippet: ```python from random import sample, choice import pyarrow as pa arr_int = pa.array(range(50)) tree_parts = ["roots", "trunk", "crown", "seeds"] arr_list = pa.array([sample(tree_parts, k=choice(range(len(tree_parts)))) for _ in range(50)]) arr_struct = pa.StructArray.from_arrays([arr_int, arr_list], names=['int_nested', 'list_nested']) arr_map = pa.array( [ [(part, choice(range(10))) for part in sample(tree_parts, k=choice(range(len(tree_parts))))] for _ in range(50) ], type=pa.map_(pa.utf8(), pa.int64()) ) table = pa.table({ 'int': pa.chunked_array([arr_int] * 10), 'list': pa.chunked_array([arr_list] * 10), 'struct': pa.chunked_array([arr_struct] * 10), 'map': pa.chunked_array([arr_map] * 10), }) print(table) ``` <details> <summary> Output Before </summary> ``` pyarrow.Table int: int64 list: list<item: string> child 0, item: string struct: struct<int_nested: int64, list_nested: list<item: string>> child 0, int_nested: int64 child 1, list_nested: list<item: string> child 0, item: string map: map<string, int64> child 0, entries: struct<key: string not null, value: int64> not null child 0, key: string not null child 1, value: int64 ---- int: [[0,1,2,3,4,5,6,7,8,9,...,40,41,42,43,44,45,46,47,48,49],[0,1,2,3,4,5,6,7,8,9,...,40,41,42,43,44,45,46,47,48,49],[0,1,2,3,4,5,6,7,8,9,...,40,41,42,43,44,45,46,47,48,49],[0,1,2,3,4,5,6,7,8,9,...,40,41,42,43,44,45,46,47,48,49],[0,1,2,3,4,5,6,7,8,9,...,40,41,42,43,44,45,46,47,48,49],[0,1,2,3,4,5,6,7,8,9,...,40,41,42,43,44,45,46,47,48,49],[0,1,2,3,4,5,6,7,8,9,...,40,41,42,43,44,45,46,47,48,49],[0,1,2,3,4,5,6,7,8,9,...,40,41,42,43,44,45,46,47,48,49],[0,1,2,3,4,5,6,7,8,9,...,40,41,42,43,44,45,46,47,48,49],[0,1,2,3,4,5,6,7,8,9,...,40,41,42,43,44,45,46,47,48,49]] list: 
[[["roots","trunk"],["trunk","crown","roots"],["crown","seeds"],["trunk"],[],["crown"],["seeds","crown"],["seeds","roots","trunk"],["roots"],["crown"],...,["trunk","seeds","crown"],["roots","crown","trunk"],["roots"],["crown","trunk","roots"],["crown"],["crown"],["trunk"],["seeds","crown","roots"],[],["trunk","roots"]],[["roots","trunk"],["trunk","crown","roots"],["crown","seeds"],["trunk"],[],["crown"],["seeds","crown"],["seeds","roots","trunk"],["roots"],["crown"],...,["trunk","seeds","crown"],["roots","crown","trunk"],["roots"],["crown","trunk","roots"],["crown"],["crown"],["trunk"],["seeds","crown","roots"],[],["trunk","roots"]],[["roots","trunk"],["trunk","crown","roots"],["crown","seeds"],["trunk"],[],["crown"],["seeds","crown"],["seeds","roots","trunk"],["roots"],["crown"],...,["trunk","seeds","crown"],["roots","crown","trunk"],["roots"],["crown","trunk","roots"],["crown"],["crown"],["trunk"],["seeds","crown","roots"],[],["trunk","roots"]],[["roots","trunk"],["trunk","crown","roots"],["crown","seeds"],["trunk"],[],["crown"],["seeds","crown"],["seeds","roots","trunk"],["roots"],["crown"],...,["trunk","seeds","crown"],["roots","crown","trunk"],["roots"],["crown","trunk","roots"],["crown"],["crown"],["trunk"],["seeds","crown","roots"],[],["trunk","roots"]],[["roots","trunk"],["trunk","crown","roots"],["crown","seeds"],["trunk"],[],["crown"],["seeds","crown"],["seeds","roots","trunk"],["roots"],["crown"],...,["trunk","seeds","crown"],["roots","crown","trunk"],["roots"],["crown","trunk","roots"],["crown"],["crown"],["trunk"],["seeds","crown","roots"],[],["trunk","roots"]],[["roots","trunk"],["trunk","crown","roots"],["crown","seeds"],["trunk"],[],["crown"],["seeds","crown"],["seeds","roots","trunk"],["roots"],["crown"],...,["trunk","seeds","crown"],["roots","crown","trunk"],["roots"],["crown","trunk","roots"],["crown"],["crown"],["trunk"],["seeds","crown","roots"],[],["trunk","roots"]],[["roots","trunk"],["trunk","crown","roots"],["crown","seeds"],["trunk"],[],["crown"],["seeds","crown"],["seeds","roots","trunk"],["roots"],["crown"],...,["trunk","seeds","crown"],["roots","crown","trunk"],["roots"],["crown","trunk","roots"],["crown"],["crown"],["trunk"],["seeds","crown","roots"],[],["trunk","roots"]],[["roots","trunk"],["trunk","crown","roots"],["crown","seeds"],["trunk"],[],["crown"],["seeds","crown"],["seeds","roots","trunk"],["roots"],["crown"],...,["trunk","seeds","crown"],["roots","crown","trunk"],["roots"],["crown","trunk","roots"],["crown"],["crown"],["trunk"],["seeds","crown","roots"],[],["trunk","roots"]],[["roots","trunk"],["trunk","crown","roots"],["crown","seeds"],["trunk"],[],["crown"],["seeds","crown"],["seeds","roots","trunk"],["roots"],["crown"],...,["trunk","seeds","crown"],["roots","crown","trunk"],["roots"],["crown","trunk","roots"],["crown"],["crown"],["trunk"],["seeds","crown","roots"],[],["trunk","roots"]],[["roots","trunk"],["trunk","crown","roots"],["crown","seeds"],["trunk"],[],["crown"],["seeds","crown"],["seeds","roots","trunk"],["roots"],["crown"],...,["trunk","seeds","crown"],["roots","crown","trunk"],["roots"],["crown","trunk","roots"],["crown"],["crown"],["trunk"],["seeds","crown","roots"],[],["trunk","roots"]]] struct: [ -- is_valid: all not null -- child 0 type: int64 [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ... 
40, 41, 42, 43, 44, 45, 46, 47, 48, 49 ] -- child 1 type: list<item: string> [ [ "roots", "trunk" ], [ "trunk", "crown", "roots" ], [ "crown", "seeds" ], [ "trunk" ], [], [ "crown" ], [ "seeds", "crown" ], [ "seeds", "roots", "trunk" ], [ "roots" ], [ "crown" ], ... [ "trunk", "seeds", "crown" ], [ "roots", "crown", "trunk" ], [ "roots" ], [ "crown", "trunk", "roots" ], [ "crown" ], [ "crown" ], [ "trunk" ], [ "seeds", "crown", "roots" ], [], [ "trunk", "roots" ] ], -- is_valid: all not null -- child 0 type: int64 [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ... 40, 41, 42, 43, 44, 45, 46, 47, 48, 49 ] -- child 1 type: list<item: string> [ [ "roots", "trunk" ], [ "trunk", "crown", "roots" ], [ "crown", "seeds" ], [ "trunk" ], [], [ "crown" ], [ "seeds", "crown" ], [ "seeds", "roots", "trunk" ], [ "roots" ], [ "crown" ], ... [ "trunk", "seeds", "crown" ], [ "roots", "crown", "trunk" ], [ "roots" ], [ "crown", "trunk", "roots" ], [ "crown" ], [ "crown" ], [ "trunk" ], [ "seeds", "crown", "roots" ], [], [ "trunk", "roots" ] ], -- is_valid: all not null -- child 0 type: int64 [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ... 40, 41, 42, 43, 44, 45, 46, 47, 48, 49 ] -- child 1 type: list<item: string> [ [ "roots", "trunk" ], [ "trunk", "crown", "roots" ], [ "crown", "seeds" ], [ "trunk" ], [], [ "crown" ], [ "seeds", "crown" ], [ "seeds", "roots", "trunk" ], [ "roots" ], [ "crown" ], ... [ "trunk", "seeds", "crown" ], [ "roots", "crown", "trunk" ], [ "roots" ], [ "crown", "trunk", "roots" ], [ "crown" ], [ "crown" ], [ "trunk" ], [ "seeds", "crown", "roots" ], [], [ "trunk", "roots" ] ], -- is_valid: all not null -- child 0 type: int64 [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ... 40, 41, 42, 43, 44, 45, 46, 47, 48, 49 ] -- child 1 type: list<item: string> [ [ "roots", "trunk" ], [ "trunk", "crown", "roots" ], [ "crown", "seeds" ], [ "trunk" ], [], [ "crown" ], [ "seeds", "crown" ], [ "seeds", "roots", "trunk" ], [ "roots" ], [ "crown" ], ... [ "trunk", "seeds", "crown" ], [ "roots", "crown", "trunk" ], [ "roots" ], [ "crown", "trunk", "roots" ], [ "crown" ], [ "crown" ], [ "trunk" ], [ "seeds", "crown", "roots" ], [], [ "trunk", "roots" ] ], -- is_valid: all not null -- child 0 type: int64 [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ... 40, 41, 42, 43, 44, 45, 46, 47, 48, 49 ] -- child 1 type: list<item: string> [ [ "roots", "trunk" ], [ "trunk", "crown", "roots" ], [ "crown", "seeds" ], [ "trunk" ], [], [ "crown" ], [ "seeds", "crown" ], [ "seeds", "roots", "trunk" ], [ "roots" ], [ "crown" ], ... [ "trunk", "seeds", "crown" ], [ "roots", "crown", "trunk" ], [ "roots" ], [ "crown", "trunk", "roots" ], [ "crown" ], [ "crown" ], [ "trunk" ], [ "seeds", "crown", "roots" ], [], [ "trunk", "roots" ] ], -- is_valid: all not null -- child 0 type: int64 [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ... 40, 41, 42, 43, 44, 45, 46, 47, 48, 49 ] -- child 1 type: list<item: string> [ [ "roots", "trunk" ], [ "trunk", "crown", "roots" ], [ "crown", "seeds" ], [ "trunk" ], [], [ "crown" ], [ "seeds", "crown" ], [ "seeds", "roots", "trunk" ], [ "roots" ], [ "crown" ], ... [ "trunk", "seeds", "crown" ], [ "roots", "crown", "trunk" ], [ "roots" ], [ "crown", "trunk", "roots" ], [ "crown" ], [ "crown" ], [ "trunk" ], [ "seeds", "crown", "roots" ], [], [ "trunk", "roots" ] ], -- is_valid: all not null -- child 0 type: int64 [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ... 
40, 41, 42, 43, 44, 45, 46, 47, 48, 49 ] -- child 1 type: list<item: string> [ [ "roots", "trunk" ], [ "trunk", "crown", "roots" ], [ "crown", "seeds" ], [ "trunk" ], [], [ "crown" ], [ "seeds", "crown" ], [ "seeds", "roots", "trunk" ], [ "roots" ], [ "crown" ], ... [ "trunk", "seeds", "crown" ], [ "roots", "crown", "trunk" ], [ "roots" ], [ "crown", "trunk", "roots" ], [ "crown" ], [ "crown" ], [ "trunk" ], [ "seeds", "crown", "roots" ], [], [ "trunk", "roots" ] ], -- is_valid: all not null -- child 0 type: int64 [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ... 40, 41, 42, 43, 44, 45, 46, 47, 48, 49 ] -- child 1 type: list<item: string> [ [ "roots", "trunk" ], [ "trunk", "crown", "roots" ], [ "crown", "seeds" ], [ "trunk" ], [], [ "crown" ], [ "seeds", "crown" ], [ "seeds", "roots", "trunk" ], [ "roots" ], [ "crown" ], ... [ "trunk", "seeds", "crown" ], [ "roots", "crown", "trunk" ], [ "roots" ], [ "crown", "trunk", "roots" ], [ "crown" ], [ "crown" ], [ "trunk" ], [ "seeds", "crown", "roots" ], [], [ "trunk", "roots" ] ], -- is_valid: all not null -- child 0 type: int64 [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ... 40, 41, 42, 43, 44, 45, 46, 47, 48, 49 ] -- child 1 type: list<item: string> [ [ "roots", "trunk" ], [ "trunk", "crown", "roots" ], [ "crown", "seeds" ], [ "trunk" ], [], [ "crown" ], [ "seeds", "crown" ], [ "seeds", "roots", "trunk" ], [ "roots" ], [ "crown" ], ... [ "trunk", "seeds", "crown" ], [ "roots", "crown", "trunk" ], [ "roots" ], [ "crown", "trunk", "roots" ], [ "crown" ], [ "crown" ], [ "trunk" ], [ "seeds", "crown", "roots" ], [], [ "trunk", "roots" ] ], -- is_valid: all not null -- child 0 type: int64 [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ... 40, 41, 42, 43, 44, 45, 46, 47, 48, 49 ] -- child 1 type: list<item: string> [ [ "roots", "trunk" ], [ "trunk", "crown", "roots" ], [ "crown", "seeds" ], [ "trunk" ], [], [ "crown" ], [ "seeds", "crown" ], [ "seeds", "roots", "trunk" ], [ "roots" ], [ "crown" ], ... 
[ "trunk", "seeds", "crown" ], [ "roots", "crown", "trunk" ], [ "roots" ], [ "crown", "trunk", "roots" ], [ "crown" ], [ "crown" ], [ "trunk" ], [ "seeds", "crown", "roots" ], [], [ "trunk", "roots" ] ]] map: [[ keys:["crown"]values:[4], keys:["seeds"]values:[7], keys:["trunk"]values:[7], keys:["roots","trunk","crown"]values:[4,8,0], keys:["crown","trunk","roots"]values:[3,6,8], keys:["crown","trunk","seeds"]values:[9,3,2], keys:["crown","seeds","roots"]values:[1,3,8], keys:["trunk","seeds"]values:[3,1], keys:[]values:[], keys:["roots","seeds","trunk"]values:[0,8,2],..., keys:[]values:[], keys:["trunk","crown","roots"]values:[7,2,8], keys:["seeds","trunk"]values:[9,5], keys:["trunk"]values:[7], keys:["roots"]values:[1], keys:["crown"]values:[5], keys:["crown","seeds","roots"]values:[2,7,2], keys:[]values:[], keys:[]values:[], keys:["roots","crown","trunk"]values:[2,1,5]],[ keys:["crown"]values:[4], keys:["seeds"]values:[7], keys:["trunk"]values:[7], keys:["roots","trunk","crown"]values:[4,8,0], keys:["crown","trunk","roots"]values:[3,6,8], keys:["crown","trunk","seeds"]values:[9,3,2], keys:["crown","seeds","roots"]values:[1,3,8], keys:["trunk","seeds"]values:[3,1], keys:[]values:[], keys:["roots","seeds","trunk"]values:[0,8,2],..., keys:[]values:[], keys:["trunk","crown","roots"]values:[7,2,8], keys:["seeds","trunk"]values:[9,5], keys:["trunk"]values:[7], keys:["roots"]values:[1], keys:["crown"]values:[5], keys:["crown","seeds","roots"]values:[2,7,2], keys:[]values:[], keys:[]values:[], keys:["roots","crown","trunk"]values:[2,1,5]],[ keys:["crown"]values:[4], keys:["seeds"]values:[7], keys:["trunk"]values:[7], keys:["roots","trunk","crown"]values:[4,8,0], keys:["crown","trunk","roots"]values:[3,6,8], keys:["crown","trunk","seeds"]values:[9,3,2], keys:["crown","seeds","roots"]values:[1,3,8], keys:["trunk","seeds"]values:[3,1], keys:[]values:[], keys:["roots","seeds","trunk"]values:[0,8,2],..., keys:[]values:[], keys:["trunk","crown","roots"]values:[7,2,8], keys:["seeds","trunk"]values:[9,5], keys:["trunk"]values:[7], keys:["roots"]values:[1], keys:["crown"]values:[5], keys:["crown","seeds","roots"]values:[2,7,2], keys:[]values:[], keys:[]values:[], keys:["roots","crown","trunk"]values:[2,1,5]],[ keys:["crown"]values:[4], keys:["seeds"]values:[7], keys:["trunk"]values:[7], keys:["roots","trunk","crown"]values:[4,8,0], keys:["crown","trunk","roots"]values:[3,6,8], keys:["crown","trunk","seeds"]values:[9,3,2], keys:["crown","seeds","roots"]values:[1,3,8], keys:["trunk","seeds"]values:[3,1], keys:[]values:[], keys:["roots","seeds","trunk"]values:[0,8,2],..., keys:[]values:[], keys:["trunk","crown","roots"]values:[7,2,8], keys:["seeds","trunk"]values:[9,5], keys:["trunk"]values:[7], keys:["roots"]values:[1], keys:["crown"]values:[5], keys:["crown","seeds","roots"]values:[2,7,2], keys:[]values:[], keys:[]values:[], keys:["roots","crown","trunk"]values:[2,1,5]],[ keys:["crown"]values:[4], keys:["seeds"]values:[7], keys:["trunk"]values:[7], keys:["roots","trunk","crown"]values:[4,8,0], keys:["crown","trunk","roots"]values:[3,6,8], keys:["crown","trunk","seeds"]values:[9,3,2], keys:["crown","seeds","roots"]values:[1,3,8], keys:["trunk","seeds"]values:[3,1], keys:[]values:[], keys:["roots","seeds","trunk"]values:[0,8,2],..., keys:[]values:[], keys:["trunk","crown","roots"]values:[7,2,8], keys:["seeds","trunk"]values:[9,5], keys:["trunk"]values:[7], keys:["roots"]values:[1], keys:["crown"]values:[5], keys:["crown","seeds","roots"]values:[2,7,2], keys:[]values:[], keys:[]values:[], 
keys:["roots","crown","trunk"]values:[2,1,5]],[ keys:["crown"]values:[4], keys:["seeds"]values:[7], keys:["trunk"]values:[7], keys:["roots","trunk","crown"]values:[4,8,0], keys:["crown","trunk","roots"]values:[3,6,8], keys:["crown","trunk","seeds"]values:[9,3,2], keys:["crown","seeds","roots"]values:[1,3,8], keys:["trunk","seeds"]values:[3,1], keys:[]values:[], keys:["roots","seeds","trunk"]values:[0,8,2],..., keys:[]values:[], keys:["trunk","crown","roots"]values:[7,2,8], keys:["seeds","trunk"]values:[9,5], keys:["trunk"]values:[7], keys:["roots"]values:[1], keys:["crown"]values:[5], keys:["crown","seeds","roots"]values:[2,7,2], keys:[]values:[], keys:[]values:[], keys:["roots","crown","trunk"]values:[2,1,5]],[ keys:["crown"]values:[4], keys:["seeds"]values:[7], keys:["trunk"]values:[7], keys:["roots","trunk","crown"]values:[4,8,0], keys:["crown","trunk","roots"]values:[3,6,8], keys:["crown","trunk","seeds"]values:[9,3,2], keys:["crown","seeds","roots"]values:[1,3,8], keys:["trunk","seeds"]values:[3,1], keys:[]values:[], keys:["roots","seeds","trunk"]values:[0,8,2],..., keys:[]values:[], keys:["trunk","crown","roots"]values:[7,2,8], keys:["seeds","trunk"]values:[9,5], keys:["trunk"]values:[7], keys:["roots"]values:[1], keys:["crown"]values:[5], keys:["crown","seeds","roots"]values:[2,7,2], keys:[]values:[], keys:[]values:[], keys:["roots","crown","trunk"]values:[2,1,5]],[ keys:["crown"]values:[4], keys:["seeds"]values:[7], keys:["trunk"]values:[7], keys:["roots","trunk","crown"]values:[4,8,0], keys:["crown","trunk","roots"]values:[3,6,8], keys:["crown","trunk","seeds"]values:[9,3,2], keys:["crown","seeds","roots"]values:[1,3,8], keys:["trunk","seeds"]values:[3,1], keys:[]values:[], keys:["roots","seeds","trunk"]values:[0,8,2],..., keys:[]values:[], keys:["trunk","crown","roots"]values:[7,2,8], keys:["seeds","trunk"]values:[9,5], keys:["trunk"]values:[7], keys:["roots"]values:[1], keys:["crown"]values:[5], keys:["crown","seeds","roots"]values:[2,7,2], keys:[]values:[], keys:[]values:[], keys:["roots","crown","trunk"]values:[2,1,5]],[ keys:["crown"]values:[4], keys:["seeds"]values:[7], keys:["trunk"]values:[7], keys:["roots","trunk","crown"]values:[4,8,0], keys:["crown","trunk","roots"]values:[3,6,8], keys:["crown","trunk","seeds"]values:[9,3,2], keys:["crown","seeds","roots"]values:[1,3,8], keys:["trunk","seeds"]values:[3,1], keys:[]values:[], keys:["roots","seeds","trunk"]values:[0,8,2],..., keys:[]values:[], keys:["trunk","crown","roots"]values:[7,2,8], keys:["seeds","trunk"]values:[9,5], keys:["trunk"]values:[7], keys:["roots"]values:[1], keys:["crown"]values:[5], keys:["crown","seeds","roots"]values:[2,7,2], keys:[]values:[], keys:[]values:[], keys:["roots","crown","trunk"]values:[2,1,5]],[ keys:["crown"]values:[4], keys:["seeds"]values:[7], keys:["trunk"]values:[7], keys:["roots","trunk","crown"]values:[4,8,0], keys:["crown","trunk","roots"]values:[3,6,8], keys:["crown","trunk","seeds"]values:[9,3,2], keys:["crown","seeds","roots"]values:[1,3,8], keys:["trunk","seeds"]values:[3,1], keys:[]values:[], keys:["roots","seeds","trunk"]values:[0,8,2],..., keys:[]values:[], keys:["trunk","crown","roots"]values:[7,2,8], keys:["seeds","trunk"]values:[9,5], keys:["trunk"]values:[7], keys:["roots"]values:[1], keys:["crown"]values:[5], keys:["crown","seeds","roots"]values:[2,7,2], keys:[]values:[], keys:[]values:[], keys:["roots","crown","trunk"]values:[2,1,5]]] ``` </details> <details open> <summary> Output after </summary> ``` pyarrow.Table int: int64 list: list<item: string> child 0, item: 
string struct: struct<int_nested: int64, list_nested: list<item: string>> child 0, int_nested: int64 child 1, list_nested: list<item: string> child 0, item: string map: map<string, int64> child 0, entries: struct<key: string not null, value: int64> not null child 0, key: string not null child 1, value: int64 ---- int: [[0,1,2,3,4,...,45,46,47,48,49],[0,1,2,3,4,...,45,46,47,48,49],...,[0,1,2,3,4,...,45,46,47,48,49],[0,1,2,3,4,...,45,46,47,48,49]] list: [[["crown","trunk","roots"],["roots","seeds"],...,[],["crown"]],[["crown","trunk","roots"],["roots","seeds"],...,[],["crown"]],...,[["crown","trunk","roots"],["roots","seeds"],...,[],["crown"]],[["crown","trunk","roots"],["roots","seeds"],...,[],["crown"]]] struct: [ -- is_valid: all not null -- child 0 type: int64 [0,1,2,3,4,...,45,46,47,48,49] -- child 1 type: list<item: string> [["crown","trunk","roots"],["roots","seeds"],...,[],["crown"]], -- is_valid: all not null -- child 0 type: int64 [0,1,2,3,4,...,45,46,47,48,49] -- child 1 type: list<item: string> [["crown","trunk","roots"],["roots","seeds"],...,[],["crown"]], ..., -- is_valid: all not null -- child 0 type: int64 [0,1,2,3,4,...,45,46,47,48,49] -- child 1 type: list<item: string> [["crown","trunk","roots"],["roots","seeds"],...,[],["crown"]], -- is_valid: all not null -- child 0 type: int64 [0,1,2,3,4,...,45,46,47,48,49] -- child 1 type: list<item: string> [["crown","trunk","roots"],["roots","seeds"],...,[],["crown"]]] map: [[keys:["trunk"]values:[2],keys:["seeds","roots"]values:[2,4],keys:["trunk","crown"]values:[2,7],keys:["trunk","crown","roots"]values:[8,8,0],keys:[]values:[],...,keys:["trunk","roots"]values:[2,8],keys:["trunk","crown"]values:[6,9],keys:[]values:[],keys:["seeds","trunk"]values:[9,6],keys:["crown","roots","trunk"]values:[0,3,9]],[keys:["trunk"]values:[2],keys:["seeds","roots"]values:[2,4],keys:["trunk","crown"]values:[2,7],keys:["trunk","crown","roots"]values:[8,8,0],keys:[]values:[],...,keys:["trunk","roots"]values:[2,8],keys:["trunk","crown"]values:[6,9],keys:[]values:[],keys:["seeds","trunk"]values:[9,6],keys:["crown","roots","trunk"]values:[0,3,9]],...,[keys:["trunk"]values:[2],keys:["seeds","roots"]values:[2,4],keys:["trunk","crown"]values:[2,7],keys:["trunk","crown","roots"]values:[8,8,0],keys:[]values:[],...,keys:["trunk","roots"]values:[2,8],keys:["trunk","crown"]values:[6,9],keys:[]values:[],keys:["seeds","trunk"]values:[9,6],keys:["crown","roots","trunk"]values:[0,3,9]],[keys:["trunk"]values:[2],keys:["seeds","roots"]values:[2,4],keys:["trunk","crown"]values:[2,7],keys:["trunk","crown","roots"]values:[8,8,0],keys:[]values:[],...,keys:["trunk","roots"]values:[2,8],keys:["trunk","crown"]values:[6,9],keys:[]values:[],keys:["seeds","trunk"]values:[9,6],keys:["crown","roots","trunk"]values:[0,3,9]]] ``` </details> Closes #12091 from wjones127/ARROW-14798-repr-child-limit Lead-authored-by: Will Jones <willjones127@gmail.com> Co-authored-by: Antoine Pitrou <pitrou@free.fr> Signed-off-by: Antoine Pitrou <antoine@python.org>
2022-02-24 18:21:55 +01:00
How many primitive items to preview at the beginning and end
of the array when the array is bigger than the window.
The remaining items are replaced with an ellipsis.
container_window : int
How many container items (such as a list in a list array)
to preview at the beginning and end of the array when the array
is bigger than the window.
skip_new_lines : bool
Whether the array should be rendered as a single line of text
or with each element on its own line.
element_size_limit : int, default 100
Maximum number of characters of a single element before it is truncated.
"""
cdef:
c_string result
PrettyPrintOptions options
with nogil:
options = PrettyPrintOptions(top_level_indent, window)
options.skip_new_lines = skip_new_lines
options.indent_size = indent
options.element_size_limit = element_size_limit
check_status(
PrettyPrint(
deref(self.ap),
options,
&result
)
)
return frombytes(result, safe=True)
def __str__(self):
return self.to_string()
def __eq__(self, other):
try:
return self.equals(other)
except TypeError:
# This also handles comparing with None
# as Array.equals(None) raises a TypeError.
return NotImplemented
def equals(Array self, Array other not None):
"""
Parameters
----------
other : pyarrow.Array
Returns
-------
bool
"""
self._assert_cpu()
other._assert_cpu()
return self.ap.Equals(deref(other.ap))
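# Usage sketch (illustrative; assumes pyarrow is imported as pa):
# `==` compares whole arrays and yields a single bool, not an element-wise
# result; use pyarrow.compute.equal for element-wise comparison.
#   arr = pa.array([1, 2, None])
#   arr.equals(pa.array([1, 2, None]))   # True
#   arr == pa.array([1, 2, 3])           # False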
def __len__(self):
return self.length()
cdef int64_t length(self):
if self.sp_array.get():
return self.sp_array.get().length()
else:
return 0
def is_null(self, *, nan_is_null=False):
"""
Return BooleanArray indicating the null values.
Parameters
----------
nan_is_null : bool (optional, default False)
Whether floating-point NaN values should also be considered null.
Returns
-------
array : boolean Array
"""
self._assert_cpu()
options = _pc().NullOptions(nan_is_null=nan_is_null)
return _pc().call_function('is_null', [self], options)
def is_nan(self):
"""
Return BooleanArray indicating the NaN values.
Returns
-------
array : boolean Array
"""
self._assert_cpu()
return _pc().call_function('is_nan', [self])
def is_valid(self):
"""
Return BooleanArray indicating the non-null values.
"""
self._assert_cpu()
return _pc().is_valid(self)
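# Usage sketch (illustrative; assumes pyarrow is imported as pa):
#   arr = pa.array([1.0, None, float("nan")])
#   arr.is_null()                   # [false, true, false]
#   arr.is_null(nan_is_null=True)   # [false, true, true]
#   arr.is_valid()                  # [true, false, true]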
def fill_null(self, fill_value):
"""
See :func:`pyarrow.compute.fill_null` for usage.
Parameters
----------
fill_value : any
The replacement value for null entries.
Returns
-------
result : Array
A new array with nulls replaced by the given value.
"""
self._assert_cpu()
return _pc().fill_null(self, fill_value)
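# Usage sketch (illustrative; assumes pyarrow is imported as pa):
#   pa.array([1, None, 3]).fill_null(0)          # -> [1, 0, 3]
#   pa.array(["a", None]).fill_null("missing")   # -> ["a", "missing"]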
def __getitem__(self, key):
"""
Slice or return value at given index
Parameters
----------
key : integer or slice
Slices with step not equal to 1 (or None) will produce a copy
rather than a zero-copy view
Returns
-------
value : Scalar (index) or Array (slice)
"""
self._assert_cpu()
if isinstance(key, slice):
return _normalize_slice(self, key)
return self.getitem(_normalize_index(key, self.length()))
cdef getitem(self, int64_t i):
self._assert_cpu()
return Scalar.wrap(GetResultValue(self.ap.GetScalar(i)))
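# Usage sketch (illustrative; assumes pyarrow is imported as pa):
#   arr = pa.array([1, 2, None, 4])
#   arr[0]       # <pyarrow.Int64Scalar: 1>
#   arr[-1]      # negative indices count from the end -> 4
#   arr[1:3]     # zero-copy slice: [2, null]
#   arr[::2]     # step != 1, so this copies: [1, null]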
def slice(self, offset=0, length=None):
"""
Compute zero-copy slice of this array.
Parameters
----------
offset : int, default 0
Offset from start of array to slice.
length : int, default None
Length of slice (default is until end of Array starting from
offset).
Returns
-------
sliced : Array
An array with the same datatype, containing the sliced values.
"""
cdef shared_ptr[CArray] result
if offset < 0:
raise IndexError('Offset must be non-negative')
offset = min(len(self), offset)
if length is None:
result = self.ap.Slice(offset)
else:
if length < 0:
raise ValueError('Length must be non-negative')
result = self.ap.Slice(offset, length)
return pyarrow_wrap_array(result)
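# Usage sketch (illustrative; assumes pyarrow is imported as pa):
#   arr = pa.array([1, 2, 3, 4, 5])
#   arr.slice(2)       # [3, 4, 5]
#   arr.slice(1, 3)    # [2, 3, 4]
#   arr.slice(10)      # offset is clamped to len(arr) -> []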
def take(self, object indices):
"""
Select values from an array.
See :func:`pyarrow.compute.take` for full usage.
Parameters
----------
indices : Array or array-like
The indices in the array whose values will be returned.
Returns
-------
taken : Array
An array with the same datatype, containing the taken values.
"""
self._assert_cpu()
return _pc().take(self, indices)
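# Usage sketch (illustrative; assumes pyarrow is imported as pa):
#   arr = pa.array(["a", "b", "c", "d"])
#   arr.take([3, 0, 0])             # ["d", "a", "a"]
#   arr.take(pa.array([1, None]))   # a null index yields a null value: ["b", null]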
def drop_null(self):
"""
Remove missing values from an array.
"""
self._assert_cpu()
return _pc().drop_null(self)
def filter(self, object mask, *, null_selection_behavior='drop'):
"""
Select values from an array.
See :func:`pyarrow.compute.filter` for full usage.
Parameters
----------
mask : Array or array-like
The boolean mask to filter the array with.
null_selection_behavior : str, default "drop"
How nulls in the mask should be handled.
Returns
-------
filtered : Array
An array of the same type, with only the elements selected by
the boolean mask.
"""
self._assert_cpu()
return _pc().filter(self, mask,
null_selection_behavior=null_selection_behavior)
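# Usage sketch (illustrative; assumes pyarrow is imported as pa):
#   arr = pa.array([1, 2, 3, 4])
#   mask = pa.array([True, False, None, True])
#   arr.filter(mask)                                        # [1, 4] (nulls in the mask dropped)
#   arr.filter(mask, null_selection_behavior="emit_null")   # [1, null, 4]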
def index(self, value, start=None, end=None, *, memory_pool=None):
"""
Find the first index of a value.
See :func:`pyarrow.compute.index` for full usage.
Parameters
----------
value : Scalar or object
The value to look for in the array.
start : int, optional
The start index where to look for `value`.
end : int, optional
The end index where to look for `value`.
memory_pool : MemoryPool, optional
A memory pool for potential memory allocations.
Returns
-------
index : Int64Scalar
The index of the value in the array (-1 if not found).
"""
self._assert_cpu()
return _pc().index(self, value, start, end, memory_pool=memory_pool)
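# Usage sketch (illustrative; assumes pyarrow is imported as pa):
#   arr = pa.array(["a", "b", "c", "b"])
#   arr.index("b")            # <pyarrow.Int64Scalar: 1>
#   arr.index("b", start=2)   # <pyarrow.Int64Scalar: 3>
#   arr.index("z").as_py()    # -1 (value not found)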
def sort(self, order="ascending", **kwargs):
"""
Sort the Array
Parameters
----------
order : str, default "ascending"
Which order to sort values in.
Accepted values are "ascending", "descending".
**kwargs : dict, optional
Additional sorting options.
As allowed by :class:`SortOptions`
Returns
-------
result : Array
"""
self._assert_cpu()
indices = _pc().sort_indices(
self,
options=_pc().SortOptions(sort_keys=[("", order)], **kwargs)
)
return self.take(indices)
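# Usage sketch (illustrative; assumes pyarrow is imported as pa; null_placement
# is one of the SortOptions keywords that can be forwarded through **kwargs):
#   arr = pa.array([3, 1, None, 2])
#   arr.sort()                            # [1, 2, 3, null]
#   arr.sort("descending")                # [3, 2, 1, null]
#   arr.sort(null_placement="at_start")   # [null, 1, 2, 3]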
def _to_pandas(self, options, types_mapper=None, **kwargs):
self._assert_cpu()
return _array_like_to_pandas(self, options, types_mapper=types_mapper)
def __array__(self, dtype=None, copy=None):
self._assert_cpu()
if copy is False:
try:
values = self.to_numpy(zero_copy_only=True)
except ArrowInvalid:
raise ValueError(
"Unable to avoid a copy while creating a numpy array as requested.\n"
"If using `np.array(obj, copy=False)` replace it with "
"`np.asarray(obj)` to allow a copy when needed"
)
# values is already a numpy array at this point, but calling np.array(..)
# again to handle the `dtype` keyword with a no-copy guarantee
return np.array(values, dtype=dtype, copy=False)
ARROW-3789: [Python] Use common conversion path for Arrow to pandas.Series/DataFrame. Zero copy optimizations for DataFrame, add split_blocks and self_destruct options The primary goal of this patch is to provide a way for some users to avoid memory doubling with converting from Arrow to pandas. This took me entirely too much time to get right, but partly I was attempting to disentangle some of the technical debt and overdue refactoring in arrow_to_pandas.cc. Summary of what's here: - Refactor ChunkedArray->Series and Table->DataFrame conversion paths to use the exact same code rather than two implementations of the same thing with slightly different behavior. The `ArrowDeserializer` helper class is now gone - Do zero-copy construction of internal DataFrame blocks for the case of a contiguous non-nullable array and a block with only 1 column represented - Add `split_blocks` option to `to_pandas` which constructs one block per DataFrame column, resulting in more zero-copy opportunities. Note that pandas's internal "consolidation" can still cause memory doubling (see discussion about this in https://github.com/pandas-dev/pandas/issues/10556) - Add `self_destruct` option to `to_pandas` which releases the Table's internal buffers as soon as they are converted to the required pandas structure. This allows memory to be reclaimed by the OS as conversion is taking place rather than having a forced memory-doubling and then post-facto reclamation (which has been causing OOM for some users) The most conservative invocation of `to_pandas` now would be `table.to_pandas(use_threads=False, split_blocks=True, self_destruct=True)` Note that the self-destruct option makes the `Table` object unsafe for further use. This is a bit dissatisfying but I wasn't sure how else to provide this capability. Closes #6067 from wesm/ARROW-3789 and squashes the following commits: 3b4260283 <Wes McKinney> Code review comments 8f39cce05 <Wes McKinney> Add some documentation. Try fixing MSVC warnings c22d280dc <Wes McKinney> Fix one MSVC cast warning 43068032c <Wes McKinney> Add "split blocks" and "self destruct" options to Table.to_pandas, with zero-copy operations for improved memory use when converting from Arrow to pandas Authored-by: Wes McKinney <wesm+git@apache.org> Signed-off-by: Wes McKinney <wesm+git@apache.org>
2020-01-14 18:25:01 -06:00
values = self.to_numpy(zero_copy_only=False)
if copy is True and is_numeric(self.type.id) and self.null_count == 0:
# to_numpy did not yet make a copy (is_numeric = integer/floats, no decimal)
return np.array(values, dtype=dtype, copy=True)
if dtype is None:
return values
return np.asarray(values, dtype=dtype)
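# Usage sketch (illustrative; assumes pyarrow and numpy are imported as pa/np):
#   arr = pa.array([1.0, 2.0, None])
#   np.asarray(arr)                    # array([ 1.,  2., nan])
#   np.asarray(arr, dtype="float32")   # casts while converting
#   np.array(arr, copy=False)          # raises ValueError here, since the nulls force a copy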
ARROW-6749: [Python] Let Array.to_numpy use general conversion code with zero_copy_only=True `Array.to_numpy` converts to a numpy array zero-copy. It currently does that with a custom `np.frombuffer` (although with a bug for timestamp data, which was the original report in [ARROW-6749](https://issues.apache.org/jira/browse/ARROW-6749)), while we also have the `zero_copy_only` guarantee in the arrow->python conversion code. So here I try to switch to that. - I added a zero_copy conversion for Timestamp/Duration. I *think* this can correctly be done since the memory layout for the actual values is identical with numpy (not sure if there is a specific reason it was not done before) - One consequence of using the conversion code is that the resulting numpy array is non-writable. While the current `to_numpy` created a writable array (and the tests actually used this property to check the zero-copy assumption, which is why tests are now failing). Are we OK with that restriction? Closes #5718 from jorisvandenbossche/ARROW-6749-to_numpy-datetimes-zero-copy and squashes the following commits: 1e0c5a7cf <Joris Van den Bossche> lint 5e723f307 <Joris Van den Bossche> update for feedback a4f4c4517 <Joris Van den Bossche> fix pandas tests c9161df9b <Joris Van den Bossche> add zero_copy_only and writable keywords to to_numpy a32070653 <Joris Van den Bossche> ARROW-6749: Let Array.to_numpy use general conversion code with zero_copy_only=True Authored-by: Joris Van den Bossche <jorisvandenbossche@gmail.com> Signed-off-by: Antoine Pitrou <antoine@python.org>
2019-11-14 18:18:52 +01:00
def to_numpy(self, zero_copy_only=True, writable=False):
"""
Return a NumPy view or copy of this array.
By default, tries to return a view of this array. This is only
supported for primitive arrays with the same memory layout as NumPy
(i.e. integers, floating point, ...) and without any nulls.
For extension arrays, this method simply delegates to the
underlying storage array.
Parameters
----------
zero_copy_only : bool, default True
If True, an exception will be raised if the conversion to a numpy
array would require copying the underlying data (e.g. in presence
of nulls, or for non-primitive types).
writable : bool, default False
For numpy arrays created with zero copy (view on the Arrow data),
the resulting array is not writable (Arrow data is immutable).
By setting this to True, a copy of the array is made to ensure
it is writable.
Returns
-------
array : numpy.ndarray
"""
self._assert_cpu()
if np is None:
raise ImportError(
"Cannot return a numpy.ndarray if NumPy is not present")
cdef:
PyObject* out
PandasOptions c_options
object values
if zero_copy_only and writable:
raise ValueError(
"Cannot return a writable array if asking for zero-copy")
# If there are nulls and the array is a DictionaryArray,
# decoding the dictionary will make sure nulls are correctly handled.
# Decoding a dictionary implies a copy, however,
# so it can't be done if the user requested zero_copy.
c_options.decode_dictionaries = not zero_copy_only
c_options.zero_copy_only = zero_copy_only
c_options.to_numpy = True
with nogil:
check_status(ConvertArrayToPandas(c_options, self.sp_array,
self, &out))
# wrap_array_output uses pandas to convert to Categorical, here
# always convert to numpy array without pandas dependency
array = PyObject_to_object(out)
if writable and not array.flags.writeable:
# if the conversion already required a copy, the result is already writable
array = array.copy()
return array
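# Usage sketch (illustrative; assumes pyarrow is imported as pa):
#   pa.array([1, 2, 3]).to_numpy()                       # zero-copy, read-only view
#   pa.array([1, None]).to_numpy()                       # raises ArrowInvalid: copy needed for nulls
#   pa.array([1, None]).to_numpy(zero_copy_only=False)   # copies; nulls become NaN in a float array
#   pa.array([1, 2, 3]).to_numpy(zero_copy_only=False, writable=True)   # mutable copy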
GH-39010: [Python] Introduce `maps_as_pydicts` parameter for `to_pylist`, `to_pydict`, `as_py` (#45471) ### Rationale for this change Currently, unfortunately `MapScalar`/`Array` types are not deserialized into proper Python `dict`s, which is unfortunate since this breaks "roundtrips" from Python -> Arrow -> Python: ``` import pyarrow as pa schema = pa.schema([pa.field('x', pa.map_(pa.string(), pa.int64()))]) data = [{'x': {'a': 1}}] pa.RecordBatch.from_pylist(data, schema=schema).to_pylist() # [{'x': [('a', 1)]}] ``` This is especially bad when storing TiBs of deeply nested data (think of lists in structs in maps...) that were created from Python and serialized into Arrow/Parquet, since they can't be read in again with native `pyarrow` methods without doing extremely ugly and computationally costly workarounds. ### What changes are included in this PR? A new parameter `maps_as_pydicts` is introduced to `to_pylist`, `to_pydict`, `as_py` which will allow proper roundtrips: ``` import pyarrow as pa schema = pa.schema([pa.field('x', pa.map_(pa.string(), pa.int64()))]) data = [{'x': {'a': 1}}] pa.RecordBatch.from_pylist(data, schema=schema).to_pylist(maps_as_pydicts="strict") # [{'x': {'a': 1}}] ``` ### Are these changes tested? Yes. There are tests for `to_pylist` and `to_pydict` included for `pyarrow.Table`, whilst low-level `MapScalar` and especially a nesting with `ListScalar` and `StructScalar` is tested. Also, duplicate keys now should throw an error, which is also tested for. ### Are there any user-facing changes? No callsites should be broken, simply a new keyword-only optional parameter is added. * GitHub Issue: #39010 Authored-by: Jonas Dedden <university@jonas-dedden.de> Signed-off-by: Antoine Pitrou <antoine@python.org>
2025-02-20 16:17:48 +01:00
def to_pylist(self, *, maps_as_pydicts=None):
"""
Convert to a list of native Python objects.
Parameters
----------
maps_as_pydicts : str, optional, default `None`
Valid values are `None`, 'lossy', or 'strict'.
The default behavior (`None`), is to convert Arrow Map arrays to
Python association lists (list-of-tuples) in the same order as the
Arrow Map, as in [(key1, value1), (key2, value2), ...].
If 'lossy' or 'strict', convert Arrow Map arrays to native Python dicts.
If 'lossy', whenever duplicate keys are detected, a warning will be printed.
The last seen value of a duplicate key will be in the Python dictionary.
If 'strict', an exception is raised instead when a duplicate key is detected.
Returns
-------
lst : list
"""
self._assert_cpu()
return [x.as_py(maps_as_pydicts=maps_as_pydicts) for x in self]
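# Usage sketch (illustrative; assumes pyarrow is imported as pa):
#   pa.array([1, None, 3]).to_pylist()        # [1, None, 3]
#   m = pa.array([[("a", 1), ("b", 2)]], type=pa.map_(pa.string(), pa.int64()))
#   m.to_pylist()                             # [[('a', 1), ('b', 2)]]
#   m.to_pylist(maps_as_pydicts="strict")     # [{'a': 1, 'b': 2}]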
def tolist(self):
"""
Alias of to_pylist for compatibility with NumPy.
"""
return self.to_pylist()
def validate(self, *, full=False):
"""
Perform validation checks. An exception is raised if validation fails.
By default only cheap validation checks are run. Pass `full=True`
for thorough validation checks (potentially O(n)).
Parameters
----------
full : bool, default False
If True, run expensive checks, otherwise cheap checks only.
Raises
------
ArrowInvalid
"""
if full:
self._assert_cpu()
with nogil:
check_status(self.ap.ValidateFull())
else:
with nogil:
check_status(self.ap.Validate())
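# Usage sketch (illustrative; assumes pyarrow is imported as pa):
#   arr = pa.array(["a", "b"])
#   arr.validate()            # cheap structural checks only
#   arr.validate(full=True)   # also runs O(n) data checks (e.g. UTF-8 validity, offsets)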
@property
def offset(self):
"""
A relative position into another array's data.
The purpose is to enable zero-copy slicing. This value defaults to zero
but must be applied on all operations with the physical storage
buffers.
"""
return self.sp_array.get().offset()
def buffers(self):
"""
Return a list of Buffer objects pointing to this array's physical
storage.
To correctly interpret these buffers, you also need to apply the offset
multiplied by the size of the stored data type.
"""
res = []
_append_array_buffers(self.sp_array.get().data().get(), res)
return res
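# Usage sketch (illustrative; assumes pyarrow is imported as pa):
#   arr = pa.array([1, 2, None], type=pa.int64())
#   arr.buffers()         # [<validity bitmap Buffer>, <values Buffer>]
#   arr.slice(1).offset   # 1 -- apply this offset when reading the buffers directly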
def copy_to(self, destination):
"""
Construct a copy of the array with all buffers on destination
device.
This method recursively copies the array's buffers and those of its
children onto the destination MemoryManager device and returns the
new Array.
Parameters
----------
destination : pyarrow.MemoryManager or pyarrow.Device
The destination device to copy the array to.
Returns
-------
Array
"""
cdef:
shared_ptr[CArray] c_array
shared_ptr[CMemoryManager] c_memory_manager
if isinstance(destination, Device):
c_memory_manager = (<Device>destination).unwrap().get().default_memory_manager()
elif isinstance(destination, MemoryManager):
c_memory_manager = (<MemoryManager>destination).unwrap()
else:
raise TypeError(
"Argument 'destination' has incorrect type (expected a "
f"pyarrow Device or MemoryManager, got {type(destination)})"
)
with nogil:
c_array = GetResultValue(self.ap.CopyTo(c_memory_manager))
return pyarrow_wrap_array(c_array)
def _export_to_c(self, out_ptr, out_schema_ptr=0):
"""
Export to a C ArrowArray struct, given its pointer.
If a C ArrowSchema struct pointer is also given, the array type
is exported to it at the same time.
Parameters
----------
out_ptr: int
The raw pointer to a C ArrowArray struct.
out_schema_ptr: int (optional)
The raw pointer to a C ArrowSchema struct.
Be careful: if you don't pass the ArrowArray struct to a consumer,
array memory will leak. This is a low-level function intended for
expert users.
"""
cdef:
void* c_ptr = _as_c_pointer(out_ptr)
void* c_schema_ptr = _as_c_pointer(out_schema_ptr,
allow_null=True)
with nogil:
check_status(ExportArray(deref(self.sp_array),
<ArrowArray*> c_ptr,
<ArrowSchema*> c_schema_ptr))
@staticmethod
def _import_from_c(in_ptr, type):
"""
Import Array from a C ArrowArray struct, given its pointer
and the imported array type.
Parameters
----------
in_ptr: int
The raw pointer to a C ArrowArray struct.
type: DataType or int
Either a DataType object, or the raw pointer to a C ArrowSchema
struct.
This is a low-level function intended for expert users.
"""
cdef:
void* c_ptr = _as_c_pointer(in_ptr)
void* c_type_ptr
shared_ptr[CArray] c_array
c_type = pyarrow_unwrap_data_type(type)
if c_type == nullptr:
# Not a DataType object, perhaps a raw ArrowSchema pointer
c_type_ptr = _as_c_pointer(type)
with nogil:
c_array = GetResultValue(ImportArray(
<ArrowArray*> c_ptr, <ArrowSchema*> c_type_ptr))
else:
with nogil:
c_array = GetResultValue(ImportArray(<ArrowArray*> c_ptr,
c_type))
return pyarrow_wrap_array(c_array)
def __arrow_c_array__(self, requested_schema=None):
"""
Get a pair of PyCapsules containing a C ArrowArray representation of the object.
Parameters
----------
requested_schema : PyCapsule | None
A PyCapsule containing a C ArrowSchema representation of a requested
schema. PyArrow will attempt to cast the array to this data type.
If None, the array will be returned as-is, with a type matching the
one returned by :meth:`__arrow_c_schema__()`.
Returns
-------
Tuple[PyCapsule, PyCapsule]
A pair of PyCapsules containing a C ArrowSchema and ArrowArray,
respectively.
"""
self._assert_cpu()
cdef:
ArrowArray* c_array
ArrowSchema* c_schema
shared_ptr[CArray] inner_array
if requested_schema is not None:
target_type = DataType._import_from_c_capsule(requested_schema)
if target_type != self.type:
try:
casted_array = _pc().cast(self, target_type, safe=True)
inner_array = pyarrow_unwrap_array(casted_array)
except ArrowInvalid as e:
raise ValueError(
f"Could not cast {self.type} to requested type {target_type}: {e}"
)
else:
inner_array = self.sp_array
else:
inner_array = self.sp_array
schema_capsule = alloc_c_schema(&c_schema)
array_capsule = alloc_c_array(&c_array)
with nogil:
check_status(ExportArray(deref(inner_array), c_array, c_schema))
return schema_capsule, array_capsule
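# Usage sketch (illustrative): round-tripping through the Arrow PyCapsule
# protocol; any producer implementing __arrow_c_array__ can be consumed the
# same way (assumes pyarrow is imported as pa).
#   arr = pa.array([1, 2, 3])
#   schema_capsule, array_capsule = arr.__arrow_c_array__()
#   pa.Array._import_from_c_capsule(schema_capsule, array_capsule)   # equal Array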
@staticmethod
def _import_from_c_capsule(schema_capsule, array_capsule):
cdef:
ArrowSchema* c_schema
ArrowArray* c_array
shared_ptr[CArray] array
c_schema = <ArrowSchema*> PyCapsule_GetPointer(schema_capsule, 'arrow_schema')
c_array = <ArrowArray*> PyCapsule_GetPointer(array_capsule, 'arrow_array')
with nogil:
array = GetResultValue(ImportArray(c_array, c_schema))
return pyarrow_wrap_array(array)
def _export_to_c_device(self, out_ptr, out_schema_ptr=0):
"""
Export to a C ArrowDeviceArray struct, given its pointer.
If a C ArrowSchema struct pointer is also given, the array type
is exported to it at the same time.
Parameters
----------
out_ptr: int
The raw pointer to a C ArrowDeviceArray struct.
out_schema_ptr: int (optional)
The raw pointer to a C ArrowSchema struct.
Be careful: if you don't pass the ArrowDeviceArray struct to a consumer,
array memory will leak. This is a low-level function intended for
expert users.
"""
cdef:
void* c_ptr = _as_c_pointer(out_ptr)
void* c_schema_ptr = _as_c_pointer(out_schema_ptr,
allow_null=True)
with nogil:
check_status(ExportDeviceArray(
deref(self.sp_array), <shared_ptr[CSyncEvent]>NULL,
<ArrowDeviceArray*> c_ptr, <ArrowSchema*> c_schema_ptr))
@staticmethod
def _import_from_c_device(in_ptr, type):
"""
Import Array from a C ArrowDeviceArray struct, given its pointer
and the imported array type.
Parameters
----------
in_ptr: int
The raw pointer to a C ArrowDeviceArray struct.
type: DataType or int
Either a DataType object, or the raw pointer to a C ArrowSchema
struct.
This is a low-level function intended for expert users.
"""
cdef:
ArrowDeviceArray* c_device_array = <ArrowDeviceArray*>_as_c_pointer(in_ptr)
void* c_type_ptr
shared_ptr[CArray] c_array
if c_device_array.device_type == ARROW_DEVICE_CUDA:
_ensure_cuda_loaded()
c_type = pyarrow_unwrap_data_type(type)
if c_type == nullptr:
# Not a DataType object, perhaps a raw ArrowSchema pointer
c_type_ptr = _as_c_pointer(type)
with nogil:
c_array = GetResultValue(
ImportDeviceArray(c_device_array, <ArrowSchema*> c_type_ptr)
)
else:
with nogil:
c_array = GetResultValue(
ImportDeviceArray(c_device_array, c_type)
)
return pyarrow_wrap_array(c_array)
def __arrow_c_device_array__(self, requested_schema=None, **kwargs):
"""
Get a pair of PyCapsules containing a C ArrowDeviceArray representation
of the object.
Parameters
----------
requested_schema : PyCapsule | None
A PyCapsule containing a C ArrowSchema representation of a requested
schema. PyArrow will attempt to cast the array to this data type.
If None, the array will be returned as-is, with a type matching the
one returned by :meth:`__arrow_c_schema__()`.
kwargs
Currently no additional keyword arguments are supported, but
this method will accept any keyword with a value of ``None``
for compatibility with future keywords.
Returns
-------
Tuple[PyCapsule, PyCapsule]
A pair of PyCapsules containing a C ArrowSchema and ArrowDeviceArray,
respectively.
"""
cdef:
ArrowDeviceArray* c_array
ArrowSchema* c_schema
shared_ptr[CArray] inner_array
non_default_kwargs = [
name for name, value in kwargs.items() if value is not None
]
if non_default_kwargs:
raise NotImplementedError(
f"Received unsupported keyword argument(s): {non_default_kwargs}"
)
if requested_schema is not None:
target_type = DataType._import_from_c_capsule(requested_schema)
if target_type != self.type:
if not self.is_cpu:
raise NotImplementedError(
"Casting to a requested schema is only supported for CPU data"
)
try:
casted_array = _pc().cast(self, target_type, safe=True)
inner_array = pyarrow_unwrap_array(casted_array)
except ArrowInvalid as e:
raise ValueError(
f"Could not cast {self.type} to requested type {target_type}: {e}"
)
else:
inner_array = self.sp_array
else:
inner_array = self.sp_array
schema_capsule = alloc_c_schema(&c_schema)
array_capsule = alloc_c_device_array(&c_array)
with nogil:
check_status(ExportDeviceArray(
deref(inner_array), <shared_ptr[CSyncEvent]>NULL,
c_array, c_schema))
return schema_capsule, array_capsule
@staticmethod
def _import_from_c_device_capsule(schema_capsule, array_capsule):
cdef:
ArrowSchema* c_schema
ArrowDeviceArray* c_array
shared_ptr[CArray] array
c_schema = <ArrowSchema*> PyCapsule_GetPointer(schema_capsule, 'arrow_schema')
c_array = <ArrowDeviceArray*> PyCapsule_GetPointer(
array_capsule, 'arrow_device_array'
)
with nogil:
array = GetResultValue(ImportDeviceArray(c_array, c_schema))
return pyarrow_wrap_array(array)
def __dlpack__(self, stream=None):
"""
Export a primitive array as a DLPack capsule.
Parameters
----------
stream : int, optional
A Python integer representing a pointer to a stream. Currently not supported.
Stream is provided by the consumer to the producer to instruct the producer
to ensure that operations can safely be performed on the array.
Returns
-------
capsule : PyCapsule
A DLPack capsule for the array, pointing to a DLManagedTensor.
"""
if stream is None:
dlm_tensor = GetResultValue(ExportArrayToDLPack(self.sp_array))
return PyCapsule_New(dlm_tensor, 'dltensor', dlpack_pycapsule_deleter)
else:
raise NotImplementedError(
"Only stream=None is supported."
)
def __dlpack_device__(self):
"""
Return the DLPack device tuple this array resides on.
Returns
-------
tuple : Tuple[int, int]
Tuple with the index specifying the type of the device (where
CPU = 1, see cpp/src/arrow/c/dlpack_abi.h) and the index of the
device, which is 0 by default for CPU.
"""
device = GetResultValue(ExportDevice(self.sp_array))
return device.device_type, device.device_id
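# Usage sketch (illustrative; assumes pyarrow and numpy are imported as pa/np,
# and that the NumPy version provides np.from_dlpack):
#   arr = pa.array([1, 2, 3])
#   arr.__dlpack_device__()   # (1, 0) -> device type CPU, device id 0
#   np.from_dlpack(arr)       # zero-copy view; only primitive arrays without nulls are supported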
@property
def device_type(self):
"""
The device type where the array resides.
Returns
-------
DeviceAllocationType
"""
return _wrap_device_allocation_type(self.sp_array.get().device_type())
@property
def is_cpu(self):
"""
Whether the array is CPU-accessible.
"""
return self.device_type == DeviceAllocationType.CPU
cdef void _assert_cpu(self) except *:
if self.sp_array.get().device_type() != CDeviceAllocationType_kCPU:
raise NotImplementedError("Implemented only for data on CPU device")
@property
def statistics(self):
"""
Statistics of the array.
"""
cdef ArrayStatistics stat
sp_stat = self.sp_array.get().statistics()
if sp_stat.get() == nullptr:
return None
else:
stat = ArrayStatistics.__new__(ArrayStatistics)
stat.init(sp_stat)
return stat
def __abs__(self):
self._assert_cpu()
return _pc().call_function('abs_checked', [self])
def __add__(self, object other):
self._assert_cpu()
return _pc().call_function('add_checked', [self, other])
def __truediv__(self, object other):
self._assert_cpu()
return _pc().call_function('divide_checked', [self, other])
def __mul__(self, object other):
self._assert_cpu()
return _pc().call_function('multiply_checked', [self, other])
def __neg__(self):
self._assert_cpu()
return _pc().call_function('negate_checked', [self])
def __pow__(self, object other):
self._assert_cpu()
return _pc().call_function('power_checked', [self, other])
def __sub__(self, object other):
self._assert_cpu()
return _pc().call_function('subtract_checked', [self, other])
def __and__(self, object other):
self._assert_cpu()
return _pc().call_function('bit_wise_and', [self, other])
def __or__(self, object other):
self._assert_cpu()
return _pc().call_function('bit_wise_or', [self, other])
def __xor__(self, object other):
self._assert_cpu()
return _pc().call_function('bit_wise_xor', [self, other])
def __lshift__(self, object other):
self._assert_cpu()
return _pc().call_function('shift_left_checked', [self, other])
def __rshift__(self, object other):
self._assert_cpu()
return _pc().call_function('shift_right_checked', [self, other])
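# Usage sketch (illustrative; assumes pyarrow is imported as pa): these
# operators dispatch to the *_checked compute kernels, so overflow raises
# ArrowInvalid instead of silently wrapping.
#   a = pa.array([1, 2, 3])
#   a + pa.array([10, 20, 30])   # [11, 22, 33]
#   a * 2                        # [2, 4, 6]
#   a << 1                       # [2, 4, 6]
#   pa.array([2**63 - 1]) + 1    # raises ArrowInvalid (integer overflow)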
cdef _array_like_to_pandas(obj, options, types_mapper):
cdef:
PyObject* out
PandasOptions c_options = _convert_pandas_options(options)
original_type = obj.type
name = obj._name
dtype = None
if types_mapper:
dtype = types_mapper(original_type)
elif original_type.id == _Type_EXTENSION:
try:
dtype = original_type.to_pandas_dtype()
except NotImplementedError:
pass
elif pandas_api.uses_string_dtype() and not options["strings_to_categorical"] and (
original_type.id == _Type_STRING or
original_type.id == _Type_LARGE_STRING or
original_type.id == _Type_STRING_VIEW
):
# for pandas 3.0+, use pandas' new default string dtype
dtype = pandas_api.pd.StringDtype(na_value=np.nan)
# Only call __from_arrow__ for Arrow extension types or when explicitly
# overridden via types_mapper
if hasattr(dtype, '__from_arrow__'):
arr = dtype.__from_arrow__(obj)
return pandas_api.series(arr, name=name, copy=False)
if pandas_api.is_v1():
# ARROW-3789: Coerce date/timestamp types to datetime64[ns]
c_options.coerce_temporal_nanoseconds = True
if isinstance(obj, Array):
with nogil:
check_status(ConvertArrayToPandas(c_options,
(<Array> obj).sp_array,
obj, &out))
elif isinstance(obj, ChunkedArray):
with nogil:
check_status(libarrow_python.ConvertChunkedArrayToPandas(
c_options,
(<ChunkedArray> obj).sp_chunked_array,
obj, &out))
arr = wrap_array_output(out)
if (isinstance(original_type, TimestampType) and
options["timestamp_as_object"]):
# ARROW-5359 - need to specify object dtype to prevent pandas
# from coercing back to ns resolution
dtype = "object"
elif types_mapper:
dtype = types_mapper(original_type)
else:
dtype = None
result = pandas_api.series(arr, dtype=dtype, name=name, copy=False)
if (isinstance(original_type, TimestampType) and
original_type.tz is not None and
# can be object dtype for non-ns and timestamp_as_object=True
result.dtype.kind == "M"):
from pyarrow.pandas_compat import make_tz_aware
result = make_tz_aware(result, original_type.tz)
return result
cdef wrap_array_output(PyObject* output):
cdef object obj = PyObject_to_object(output)
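# Dictionary-encoded results arrive as a dict with 'indices', 'dictionary'
# and 'ordered' entries; rebuild a pandas Categorical from those pieces.
# Everything else is passed through unchanged.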
if isinstance(obj, dict):
return _pandas_api.categorical_type.from_codes(
obj['indices'], categories=obj['dictionary'], ordered=obj['ordered']
)
else:
return obj
cdef class NullArray(Array):
"""
Concrete class for Arrow arrays of null data type.
"""
cdef class BooleanArray(Array):
"""
Concrete class for Arrow arrays of boolean data type.
"""
@property
def false_count(self):
return (<CBooleanArray*> self.ap).false_count()
@property
def true_count(self):
return (<CBooleanArray*> self.ap).true_count()
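A quick illustration of these counters (a sketch; null values count as neither True nor False):
```
import pyarrow as pa

arr = pa.array([True, False, True, None])
assert arr.true_count == 2
assert arr.false_count == 1   # the null slot is excluded from both counts
```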
cdef class NumericArray(Array):
"""
A base class for Arrow numeric arrays.
"""
cdef class IntegerArray(NumericArray):
"""
A base class for Arrow integer arrays.
"""
cdef class FloatingPointArray(NumericArray):
"""
A base class for Arrow floating-point arrays.
"""
cdef class Int8Array(IntegerArray):
"""
Concrete class for Arrow arrays of int8 data type.
"""
cdef class UInt8Array(IntegerArray):
"""
Concrete class for Arrow arrays of uint8 data type.
"""
cdef class Int16Array(IntegerArray):
"""
Concrete class for Arrow arrays of int16 data type.
"""
cdef class UInt16Array(IntegerArray):
"""
Concrete class for Arrow arrays of uint16 data type.
"""
cdef class Int32Array(IntegerArray):
"""
Concrete class for Arrow arrays of int32 data type.
"""
cdef class UInt32Array(IntegerArray):
"""
Concrete class for Arrow arrays of uint32 data type.
"""
cdef class Int64Array(IntegerArray):
"""
Concrete class for Arrow arrays of int64 data type.
"""
cdef class UInt64Array(IntegerArray):
"""
Concrete class for Arrow arrays of uint64 data type.
"""
cdef class Date32Array(NumericArray):
"""
Concrete class for Arrow arrays of date32 data type.
"""
cdef class Date64Array(NumericArray):
"""
Concrete class for Arrow arrays of date64 data type.
"""
cdef class TimestampArray(NumericArray):
"""
Concrete class for Arrow arrays of timestamp data type.
"""
cdef class Time32Array(NumericArray):
"""
Concrete class for Arrow arrays of time32 data type.
"""
cdef class Time64Array(NumericArray):
"""
Concrete class for Arrow arrays of time64 data type.
"""
cdef class DurationArray(NumericArray):
"""
Concrete class for Arrow arrays of duration data type.
"""
cdef class MonthDayNanoIntervalArray(Array):
"""
Concrete class for Arrow arrays of interval[MonthDayNano] type.
"""
GH-39010: [Python] Introduce `maps_as_pydicts` parameter for `to_pylist`, `to_pydict`, `as_py` (#45471) ### Rationale for this change Currently, unfortunately `MapScalar`/`Array` types are not deserialized into proper Python `dict`s, which is unfortunate since this breaks "roundtrips" from Python -> Arrow -> Python: ``` import pyarrow as pa schema = pa.schema([pa.field('x', pa.map_(pa.string(), pa.int64()))]) data = [{'x': {'a': 1}}] pa.RecordBatch.from_pylist(data, schema=schema).to_pylist() # [{'x': [('a', 1)]}] ``` This is especially bad when storing TiBs of deeply nested data (think of lists in structs in maps...) that were created from Python and serialized into Arrow/Parquet, since they can't be read in again with native `pyarrow` methods without doing extremely ugly and computationally costly workarounds. ### What changes are included in this PR? A new parameter `maps_as_pydicts` is introduced to `to_pylist`, `to_pydict`, `as_py` which will allow proper roundtrips: ``` import pyarrow as pa schema = pa.schema([pa.field('x', pa.map_(pa.string(), pa.int64()))]) data = [{'x': {'a': 1}}] pa.RecordBatch.from_pylist(data, schema=schema).to_pylist(maps_as_pydicts="strict") # [{'x': {'a': 1}}] ``` ### Are these changes tested? Yes. There are tests for `to_pylist` and `to_pydict` included for `pyarrow.Table`, whilst low-level `MapScalar` and especially a nesting with `ListScalar` and `StructScalar` is tested. Also, duplicate keys now should throw an error, which is also tested for. ### Are there any user-facing changes? No callsites should be broken, simply a new keyword-only optional parameter is added. * GitHub Issue: #39010 Authored-by: Jonas Dedden <university@jonas-dedden.de> Signed-off-by: Antoine Pitrou <antoine@python.org>
2025-02-20 16:17:48 +01:00
def to_pylist(self, *, maps_as_pydicts=None):
"""
Convert to a list of native Python objects.
pyarrow.MonthDayNano is used as the native representation.
Parameters
----------
maps_as_pydicts : str, optional, default `None`
Valid values are `None`, 'lossy', or 'strict'.
This parameter is ignored for non-nested Scalars.
Returns
-------
lst : list
"""
cdef:
CResult[PyObject*] maybe_py_list
PyObject* py_list
CMonthDayNanoIntervalArray* array
array = <CMonthDayNanoIntervalArray*>self.sp_array.get()
maybe_py_list = MonthDayNanoIntervalArrayToPyList(deref(array))
py_list = GetResultValue(maybe_py_list)
return PyObject_to_object(py_list)
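A hypothetical usage sketch, assuming the `pa.month_day_nano_interval()` type and the `pa.MonthDayNano` value wrapper:
```
import pyarrow as pa

arr = pa.array([pa.MonthDayNano([1, 15, 100])],
               type=pa.month_day_nano_interval())
print(arr.to_pylist())
# a one-element list holding a pyarrow.MonthDayNano value
# (months=1, days=15, nanoseconds=100)
```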
cdef class HalfFloatArray(FloatingPointArray):
"""
Concrete class for Arrow arrays of float16 data type.
"""
cdef class FloatArray(FloatingPointArray):
"""
Concrete class for Arrow arrays of float32 data type.
"""
cdef class DoubleArray(FloatingPointArray):
"""
Concrete class for Arrow arrays of float64 data type.
"""
cdef class FixedSizeBinaryArray(Array):
"""
Concrete class for Arrow arrays of a fixed-size binary data type.
"""
cdef class Decimal32Array(FixedSizeBinaryArray):
"""
Concrete class for Arrow arrays of decimal32 data type.
"""
cdef class Decimal64Array(FixedSizeBinaryArray):
"""
Concrete class for Arrow arrays of decimal64 data type.
"""
cdef class Decimal128Array(FixedSizeBinaryArray):
"""
Concrete class for Arrow arrays of decimal128 data type.
"""
cdef class Decimal256Array(FixedSizeBinaryArray):
"""
Concrete class for Arrow arrays of decimal256 data type.
"""
cdef class BaseListArray(Array):
def flatten(self, recursive=False):
"""
Unnest this [Large]ListArray/[Large]ListViewArray/FixedSizeListArray
according to 'recursive'.
Note that this method is different from ``self.values`` in that
it takes care of the slicing offset as well as null elements backed
by non-empty sub-lists.
Parameters
----------
recursive : bool, default False, optional
When True, flatten this logical list-array recursively until an
array of non-list values is formed.
When False, flatten only the top level.
Returns
-------
result : Array
Examples
--------
Basic logical list-array's flatten
>>> import pyarrow as pa
>>> values = [1, 2, 3, 4]
>>> offsets = [2, 1, 0]
>>> sizes = [2, 2, 2]
>>> array = pa.ListViewArray.from_arrays(offsets, sizes, values)
>>> array
<pyarrow.lib.ListViewArray object at ...>
[
[
3,
4
],
[
2,
3
],
[
1,
2
]
]
>>> array.flatten()
<pyarrow.lib.Int64Array object at ...>
[
3,
4,
2,
3,
1,
2
]
When recursive=True, nested list arrays are flattened recursively
until an array of non-list values is formed.
>>> array = pa.array([
... None,
... [
... [1, None, 2],
... None,
... [3, 4]
... ],
... [],
... [
... [],
... [5, 6],
... None
... ],
... [
... [7, 8]
... ]
... ], type=pa.list_(pa.list_(pa.int64())))
>>> array.flatten(True)
<pyarrow.lib.Int64Array object at ...>
[
1,
null,
2,
3,
4,
5,
6,
7,
8
]
"""
options = _pc().ListFlattenOptions(recursive)
return _pc().list_flatten(self, options=options)
def value_parent_indices(self):
"""
Return an array of the same length as the list child values array, where
each output value is the index of the parent list slot containing that
child value.
Examples
--------
>>> import pyarrow as pa
>>> arr = pa.array([[1, 2, 3], [], None, [4]],
... type=pa.list_(pa.int32()))
>>> arr.value_parent_indices()
<pyarrow.lib.Int64Array object at ...>
[
0,
0,
0,
3
]
"""
return _pc().list_parent_indices(self)
def value_lengths(self):
"""
Return an array of integers with values equal to the length of each list
element. Null list values are null in the output.
Examples
--------
>>> import pyarrow as pa
>>> arr = pa.array([[1, 2, 3], [], None, [4]],
... type=pa.list_(pa.int32()))
>>> arr.value_lengths()
<pyarrow.lib.Int32Array object at ...>
[
3,
0,
null,
1
]
"""
return _pc().list_value_length(self)
cdef class ListArray(BaseListArray):
"""
Concrete class for Arrow arrays of a list data type.
"""
@staticmethod
def from_arrays(offsets, values, DataType type=None, MemoryPool pool=None, mask=None):
"""
Construct ListArray from arrays of int32 offsets and values.
Parameters
----------
offsets : Array (int32 type)
values : Array (any type)
type : DataType, optional
If not specified, a default ListType with the values' type is
used.
pool : MemoryPool, optional
mask : Array (boolean type), optional
Indicate which values are null (True) or not null (False).
Returns
-------
list_array : ListArray
Examples
--------
>>> import pyarrow as pa
>>> values = pa.array([1, 2, 3, 4])
>>> offsets = pa.array([0, 2, 4])
>>> pa.ListArray.from_arrays(offsets, values)
<pyarrow.lib.ListArray object at ...>
[
[
1,
2
],
[
3,
4
]
]
>>> # nulls in the offsets array become null lists
>>> offsets = pa.array([0, None, 2, 4])
>>> pa.ListArray.from_arrays(offsets, values)
<pyarrow.lib.ListArray object at ...>
[
[
1,
2
],
null,
[
3,
4
]
]
"""
cdef:
Array _offsets, _values
shared_ptr[CArray] out
shared_ptr[CBuffer] c_mask
cdef CMemoryPool* cpool = maybe_unbox_memory_pool(pool)
_offsets = asarray(offsets, type='int32')
_values = asarray(values)
c_mask = c_mask_inverted_from_obj(mask, pool)
if type is not None:
with nogil:
out = GetResultValue(
CListArray.FromArraysAndType(
type.sp_type, _offsets.ap[0], _values.ap[0], cpool, c_mask))
else:
with nogil:
out = GetResultValue(
CListArray.FromArrays(
_offsets.ap[0], _values.ap[0], cpool, c_mask))
cdef Array result = pyarrow_wrap_array(out)
result.validate()
return result
@property
def values(self):
"""
Return the underlying array of values which backs the ListArray
ignoring the array's offset.
If any of the list elements are null, but are backed by a
non-empty sub-list, those elements will be included in the
output.
Compare with :meth:`flatten`, which returns only the non-null
values taking into consideration the array's offset.
Returns
-------
values : Array
See Also
--------
ListArray.flatten : ...
Examples
--------
The values include null elements from sub-lists:
>>> import pyarrow as pa
>>> array = pa.array([[1, 2], None, [3, 4, None, 6]])
>>> array.values
<pyarrow.lib.Int64Array object at ...>
[
1,
2,
3,
4,
null,
6
]
If an array is sliced, the slice still uses the same
underlying data as the original array, just with an
offset. Since values ignores the offset, the values are the
same:
>>> sliced = array.slice(1, 2)
>>> sliced
<pyarrow.lib.ListArray object at ...>
[
null,
[
3,
4,
null,
6
]
]
>>> sliced.values
<pyarrow.lib.Int64Array object at ...>
[
1,
2,
3,
4,
null,
6
]
"""
cdef CListArray* arr = <CListArray*> self.ap
return pyarrow_wrap_array(arr.values())
@property
def offsets(self):
"""
Return the list offsets as an int32 array.
The returned array will not have a validity bitmap, so you cannot
expect to pass it to `ListArray.from_arrays` and get back the same
list array if the original one has nulls.
Returns
-------
offsets : Int32Array
Examples
--------
>>> import pyarrow as pa
>>> array = pa.array([[1, 2], None, [3, 4, 5]])
>>> array.offsets
<pyarrow.lib.Int32Array object at ...>
[
0,
2,
2,
5
]
"""
return pyarrow_wrap_array((<CListArray*> self.ap).offsets())
cdef class LargeListArray(BaseListArray):
"""
Concrete class for Arrow arrays of a large list data type.
Identical to ListArray, but 64-bit offsets.
"""
@staticmethod
def from_arrays(offsets, values, DataType type=None, MemoryPool pool=None, mask=None):
"""
Construct LargeListArray from arrays of int64 offsets and values.
Parameters
----------
offsets : Array (int64 type)
values : Array (any type)
type : DataType, optional
If not specified, a default ListType with the values' type is
used.
pool : MemoryPool, optional
mask : Array (boolean type), optional
Indicate which values are null (True) or not null (False).
Returns
-------
list_array : LargeListArray
"""
cdef:
Array _offsets, _values
shared_ptr[CArray] out
shared_ptr[CBuffer] c_mask
cdef CMemoryPool* cpool = maybe_unbox_memory_pool(pool)
_offsets = asarray(offsets, type='int64')
_values = asarray(values)
c_mask = c_mask_inverted_from_obj(mask, pool)
if type is not None:
with nogil:
out = GetResultValue(
CLargeListArray.FromArraysAndType(
type.sp_type, _offsets.ap[0], _values.ap[0], cpool, c_mask))
else:
with nogil:
out = GetResultValue(
CLargeListArray.FromArrays(
_offsets.ap[0], _values.ap[0], cpool, c_mask))
cdef Array result = pyarrow_wrap_array(out)
result.validate()
return result
@property
def values(self):
"""
Return the underlying array of values which backs the LargeListArray
ignoring the array's offset.
If any of the list elements are null, but are backed by a
non-empty sub-list, those elements will be included in the
output.
Compare with :meth:`flatten`, which returns only the non-null
values taking into consideration the array's offset.
Returns
-------
values : Array
See Also
--------
LargeListArray.flatten : ...
Examples
--------
The values include null elements from the sub-lists:
>>> import pyarrow as pa
>>> array = pa.array(
... [[1, 2], None, [3, 4, None, 6]],
... type=pa.large_list(pa.int32()),
... )
>>> array.values
<pyarrow.lib.Int32Array object at ...>
[
1,
2,
3,
4,
null,
6
]
If an array is sliced, the slice still uses the same
underlying data as the original array, just with an
offset. Since values ignores the offset, the values are the
same:
>>> sliced = array.slice(1, 2)
>>> sliced
<pyarrow.lib.LargeListArray object at ...>
[
null,
[
3,
4,
null,
6
]
]
>>> sliced.values
<pyarrow.lib.Int32Array object at ...>
[
1,
2,
3,
4,
null,
6
]
"""
cdef CLargeListArray* arr = <CLargeListArray*> self.ap
return pyarrow_wrap_array(arr.values())
@property
def offsets(self):
"""
Return the list offsets as an int64 array.
The returned array will not have a validity bitmap, so you cannot
expect to pass it to `LargeListArray.from_arrays` and get back the
same list array if the original one has nulls.
Returns
-------
offsets : Int64Array
"""
return pyarrow_wrap_array((<CLargeListArray*> self.ap).offsets())
cdef class ListViewArray(BaseListArray):
"""
Concrete class for Arrow arrays of a list view data type.
"""
@staticmethod
def from_arrays(offsets, sizes, values, DataType type=None, MemoryPool pool=None, mask=None):
"""
Construct ListViewArray from arrays of int32 offsets, sizes, and values.
Parameters
----------
offsets : Array (int32 type)
sizes : Array (int32 type)
values : Array (any type)
type : DataType, optional
If not specified, a default ListType with the values' type is
used.
pool : MemoryPool, optional
mask : Array (boolean type), optional
Indicate which values are null (True) or not null (False).
Returns
-------
list_view_array : ListViewArray
Examples
--------
>>> import pyarrow as pa
>>> values = pa.array([1, 2, 3, 4])
>>> offsets = pa.array([0, 1, 2])
>>> sizes = pa.array([2, 2, 2])
>>> pa.ListViewArray.from_arrays(offsets, sizes, values)
<pyarrow.lib.ListViewArray object at ...>
[
[
1,
2
],
[
2,
3
],
[
3,
4
]
]
>>> # use a null mask to represent null values
>>> mask = pa.array([False, True, False])
>>> pa.ListViewArray.from_arrays(offsets, sizes, values, mask=mask)
<pyarrow.lib.ListViewArray object at ...>
[
[
1,
2
],
null,
[
3,
4
]
]
>>> # null values can be defined in either offsets or sizes arrays
>>> # WARNING: this will result in a copy of the offsets or sizes arrays
>>> offsets = pa.array([0, None, 2])
>>> pa.ListViewArray.from_arrays(offsets, sizes, values)
<pyarrow.lib.ListViewArray object at ...>
[
[
1,
2
],
null,
[
3,
4
]
]
"""
cdef:
Array _offsets, _sizes, _values
shared_ptr[CArray] out
shared_ptr[CBuffer] c_mask
CMemoryPool* cpool = maybe_unbox_memory_pool(pool)
_offsets = asarray(offsets, type='int32')
_sizes = asarray(sizes, type='int32')
_values = asarray(values)
c_mask = c_mask_inverted_from_obj(mask, pool)
if type is not None:
with nogil:
out = GetResultValue(
CListViewArray.FromArraysAndType(
type.sp_type, _offsets.ap[0], _sizes.ap[0], _values.ap[0], cpool, c_mask))
else:
with nogil:
out = GetResultValue(
CListViewArray.FromArrays(
_offsets.ap[0], _sizes.ap[0], _values.ap[0], cpool, c_mask))
cdef Array result = pyarrow_wrap_array(out)
result.validate()
return result
@property
def values(self):
"""
Return the underlying array of values which backs the ListViewArray
ignoring the array's offset and sizes.
The values array may be out of order and/or contain additional values
that are not found in the logical representation of the array. The only
guarantee is that each non-null value in the ListViewArray is contiguous.
Compare with :meth:`flatten`, which returns only the non-null
values taking into consideration the array's order and offset.
Returns
-------
values : Array
Examples
--------
The values include null elements from sub-lists:
>>> import pyarrow as pa
>>> values = [1, 2, None, 3, 4]
>>> offsets = [0, 0, 1]
>>> sizes = [2, 0, 4]
>>> array = pa.ListViewArray.from_arrays(offsets, sizes, values)
>>> array
<pyarrow.lib.ListViewArray object at ...>
[
[
1,
2
],
[],
[
2,
null,
3,
4
]
]
>>> array.values
<pyarrow.lib.Int64Array object at ...>
[
1,
2,
null,
3,
4
]
"""
cdef CListViewArray* arr = <CListViewArray*> self.ap
return pyarrow_wrap_array(arr.values())
@property
def offsets(self):
"""
Return the list offsets as an int32 array.
The returned array will not have a validity bitmap, so you cannot
expect to pass it to `ListViewArray.from_arrays` and get back the same
list array if the original one has nulls.
Returns
-------
offsets : Int32Array
Examples
--------
>>> import pyarrow as pa
>>> values = [1, 2, None, 3, 4]
>>> offsets = [0, 0, 1]
>>> sizes = [2, 0, 4]
>>> array = pa.ListViewArray.from_arrays(offsets, sizes, values)
>>> array.offsets
<pyarrow.lib.Int32Array object at ...>
[
0,
0,
1
]
"""
return pyarrow_wrap_array((<CListViewArray*> self.ap).offsets())
@property
def sizes(self):
"""
Return the list sizes as an int32 array.
The returned array will not have a validity bitmap, so you cannot
expect to pass it to `ListViewArray.from_arrays` and get back the same
list array if the original one has nulls.
Returns
-------
sizes : Int32Array
Examples
--------
>>> import pyarrow as pa
>>> values = [1, 2, None, 3, 4]
>>> offsets = [0, 0, 1]
>>> sizes = [2, 0, 4]
>>> array = pa.ListViewArray.from_arrays(offsets, sizes, values)
>>> array.sizes
<pyarrow.lib.Int32Array object at ...>
[
2,
0,
4
]
"""
return pyarrow_wrap_array((<CListViewArray*> self.ap).sizes())
cdef class LargeListViewArray(BaseListArray):
"""
Concrete class for Arrow arrays of a large list view data type.
Identical to ListViewArray, but with 64-bit offsets.
"""
@staticmethod
def from_arrays(offsets, sizes, values, DataType type=None, MemoryPool pool=None, mask=None):
"""
Construct LargeListViewArray from arrays of int64 offsets, sizes, and values.
Parameters
----------
offsets : Array (int64 type)
sizes : Array (int64 type)
values : Array (any type)
type : DataType, optional
If not specified, a default ListType with the values' type is
used.
pool : MemoryPool, optional
mask : Array (boolean type), optional
Indicate which values are null (True) or not null (False).
Returns
-------
list_view_array : LargeListViewArray
Examples
--------
>>> import pyarrow as pa
>>> values = pa.array([1, 2, 3, 4])
>>> offsets = pa.array([0, 1, 2])
>>> sizes = pa.array([2, 2, 2])
>>> pa.LargeListViewArray.from_arrays(offsets, sizes, values)
<pyarrow.lib.LargeListViewArray object at ...>
[
[
1,
2
],
[
2,
3
],
[
3,
4
]
]
>>> # use a null mask to represent null values
>>> mask = pa.array([False, True, False])
>>> pa.LargeListViewArray.from_arrays(offsets, sizes, values, mask=mask)
<pyarrow.lib.LargeListViewArray object at ...>
[
[
1,
2
],
null,
[
3,
4
]
]
>>> # null values can be defined in either offsets or sizes arrays
>>> # WARNING: this will result in a copy of the offsets or sizes arrays
>>> offsets = pa.array([0, None, 2])
>>> pa.LargeListViewArray.from_arrays(offsets, sizes, values)
<pyarrow.lib.LargeListViewArray object at ...>
[
[
1,
2
],
null,
[
3,
4
]
]
"""
cdef:
Array _offsets, _sizes, _values
shared_ptr[CArray] out
shared_ptr[CBuffer] c_mask
CMemoryPool* cpool = maybe_unbox_memory_pool(pool)
_offsets = asarray(offsets, type='int64')
_sizes = asarray(sizes, type='int64')
_values = asarray(values)
c_mask = c_mask_inverted_from_obj(mask, pool)
if type is not None:
with nogil:
out = GetResultValue(
CLargeListViewArray.FromArraysAndType(
type.sp_type, _offsets.ap[0], _sizes.ap[0], _values.ap[0], cpool, c_mask))
else:
with nogil:
out = GetResultValue(
CLargeListViewArray.FromArrays(
_offsets.ap[0], _sizes.ap[0], _values.ap[0], cpool, c_mask))
cdef Array result = pyarrow_wrap_array(out)
result.validate()
return result
@property
def values(self):
"""
Return the underlying array of values which backs the LargeListViewArray
ignoring the array's offset and sizes.
The values array may be out of order and/or contain additional values
that are not found in the logical representation of the array. The only
guarantee is that each non-null value in the LargeListViewArray is contiguous.
Compare with :meth:`flatten`, which returns only the non-null
values taking into consideration the array's order and offset.
Returns
-------
values : Array
See Also
--------
LargeListViewArray.flatten : ...
Examples
--------
The values include null elements from sub-lists:
>>> import pyarrow as pa
>>> values = [1, 2, None, 3, 4]
>>> offsets = [0, 0, 1]
>>> sizes = [2, 0, 4]
>>> array = pa.LargeListViewArray.from_arrays(offsets, sizes, values)
>>> array
<pyarrow.lib.LargeListViewArray object at ...>
[
[
1,
2
],
[],
[
2,
null,
3,
4
]
]
>>> array.values
<pyarrow.lib.Int64Array object at ...>
[
1,
2,
null,
3,
4
]
"""
cdef CLargeListViewArray* arr = <CLargeListViewArray*> self.ap
return pyarrow_wrap_array(arr.values())
@property
def offsets(self):
"""
Return the list view offsets as an int64 array.
The returned array will not have a validity bitmap, so you cannot
expect to pass it to `LargeListViewArray.from_arrays` and get back the
same list array if the original one has nulls.
Returns
-------
offsets : Int64Array
Examples
--------
>>> import pyarrow as pa
>>> values = [1, 2, None, 3, 4]
>>> offsets = [0, 0, 1]
>>> sizes = [2, 0, 4]
>>> array = pa.LargeListViewArray.from_arrays(offsets, sizes, values)
>>> array.offsets
<pyarrow.lib.Int64Array object at ...>
[
0,
0,
1
]
"""
return pyarrow_wrap_array((<CLargeListViewArray*> self.ap).offsets())
@property
def sizes(self):
"""
Return the list view sizes as an int64 array.
The returned array will not have a validity bitmap, so you cannot
expect to pass it to `LargeListViewArray.from_arrays` and get back the
same list array if the original one has nulls.
Returns
-------
sizes : Int64Array
Examples
--------
>>> import pyarrow as pa
>>> values = [1, 2, None, 3, 4]
>>> offsets = [0, 0, 1]
>>> sizes = [2, 0, 4]
>>> array = pa.LargeListViewArray.from_arrays(offsets, sizes, values)
>>> array.sizes
<pyarrow.lib.Int64Array object at ...>
[
2,
0,
4
]
"""
return pyarrow_wrap_array((<CLargeListViewArray*> self.ap).sizes())
cdef class MapArray(ListArray):
ARROW-6904: [Python] Add support for MapArray This adds support for `MapArray` in Python with conversion from a Python sequence of either dictionaries with "key" and "value" fields or a tuple with 2 elements. Additionally, added the API `MapArray.from_arrays` to build a `MapArray` from individual offset, key, value arrays. Closes #5774 from BryanCutler/python-impl-MapArray-ARROW-6904 and squashes the following commits: f2935c378 <Bryan Cutler> Avoid lookup of key_builder at each value c1fa14ef5 <Bryan Cutler> Added MapArray decl to lib.pxd 6b23dc2aa <Bryan Cutler> typo 5385529ff <Bryan Cutler> Address comments, add test compare with ListBuilder of structs 9772e1df4 <Bryan Cutler> unicode repr for py2 f1a354764 <Bryan Cutler> Fix test_map error for py2 7bdfdebb6 <Bryan Cutler> Changed MapValue.as_py() to return a list of tuples, added test_scalars 3c1a7f85a <Bryan Cutler> Add MapType to schema_test 2f4c29652 <Bryan Cutler> Add Map tests to test_misc acd0e6b04 <Bryan Cutler> Add tests for python MapType 2555849c4 <Bryan Cutler> Add tests for MapArray::FromArrays 99ba44f0f <Bryan Cutler> Fix python2 test error 442dac2bc <Bryan Cutler> Fix lint issues 3a1134d01 <Bryan Cutler> Added checks in MapConverter to verify appended value, passing tests fa883cc9e <Bryan Cutler> Added tests for python to arrow conversion, need to pass verify dicts 70b453db6 <Bryan Cutler> Added test_array using from_arrays 72ab5295c <Bryan Cutler> Fix MapArray.from_arrays to work with null values 7f8770140 <Bryan Cutler> Adding map converter as ListConverter with MapType 309ac112b <Bryan Cutler> Change MapBuilder to use a StructBuilder internally 033479de8 <Bryan Cutler> Fix MapArray::SetData to use ListArray::SetData without faking type cf6a4fb72 <Bryan Cutler> Added MapType and MapArray, working in python with FromArrays Authored-by: Bryan Cutler <cutlerb@gmail.com> Signed-off-by: Antoine Pitrou <antoine@python.org>
2019-12-04 20:27:27 +01:00
"""
Concrete class for Arrow arrays of a map data type.
"""
@staticmethod
def from_arrays(offsets, keys, items, DataType type=None, MemoryPool pool=None, mask=None):
"""
Construct MapArray from arrays of int32 offsets and key, item arrays.
Parameters
----------
offsets : array-like or sequence (int32 type)
keys : array-like or sequence (any type)
items : array-like or sequence (any type)
type : DataType, optional
If not specified, a default MapType with the keys' and items' type is used.
pool : MemoryPool
mask : Array (boolean type), optional
Indicate which values are null (True) or not null (False).
Returns
-------
map_array : MapArray
Examples
--------
First, let's understand the structure of our dataset when viewed in a rectangular data model.
A total of 5 respondents answered the question "How much did you like the movie x?".
The value -1 in the integer array means that the value is missing. The boolean array
represents the null bitmask corresponding to the missing values in the integer array.
>>> import numpy as np
>>> import pyarrow as pa
>>> movies_rectangular = np.ma.masked_array([
... [10, -1, -1],
... [8, 4, 5],
... [-1, 10, 3],
... [-1, -1, -1],
... [-1, -1, -1]
... ],
... [
... [False, True, True],
... [False, False, False],
... [True, False, False],
... [True, True, True],
... [True, True, True],
... ])
To represent the same data with the MapArray and from_arrays, the data is
formed like this:
>>> offsets = [
... 0, # -- row 1 start
... 1, # -- row 2 start
... 4, # -- row 3 start
... 6, # -- row 4 start
... 6, # -- row 5 start
... 6, # -- row 5 end
... ]
>>> movies = [
... "Dark Knight", # ---------------------------------- row 1
... "Dark Knight", "Meet the Parents", "Superman", # -- row 2
... "Meet the Parents", "Superman", # ----------------- row 3
... ]
>>> likings = [
... 10, # -------- row 1
... 8, 4, 5, # --- row 2
... 10, 3 # ------ row 3
... ]
>>> pa.MapArray.from_arrays(offsets, movies, likings).to_pandas()
0 [(Dark Knight, 10)]
1 [(Dark Knight, 8), (Meet the Parents, 4), (Sup...
2 [(Meet the Parents, 10), (Superman, 3)]
3 []
4 []
dtype: object
If the data in the empty rows needs to be marked as missing, it's possible
to do so by modifying the offsets argument, so that we specify `None` as
the starting positions of the rows we want marked as missing. The end row
offset still has to refer to the existing value from keys (and values):
>>> offsets = [
... 0, # ----- row 1 start
... 1, # ----- row 2 start
... 4, # ----- row 3 start
... None, # -- row 4 start
... None, # -- row 5 start
... 6, # ----- row 5 end
... ]
>>> pa.MapArray.from_arrays(offsets, movies, likings).to_pandas()
0 [(Dark Knight, 10)]
1 [(Dark Knight, 8), (Meet the Parents, 4), (Sup...
2 [(Meet the Parents, 10), (Superman, 3)]
3 None
4 None
dtype: object
"""
cdef:
Array _offsets, _keys, _items
shared_ptr[CArray] out
shared_ptr[CBuffer] c_mask
cdef CMemoryPool* cpool = maybe_unbox_memory_pool(pool)
_offsets = asarray(offsets, type='int32')
_keys = asarray(keys)
_items = asarray(items)
c_mask = c_mask_inverted_from_obj(mask, pool)
if type is not None:
with nogil:
out = GetResultValue(
CMapArray.FromArraysAndType(
type.sp_type, _offsets.sp_array,
_keys.sp_array, _items.sp_array, cpool, c_mask))
else:
with nogil:
out = GetResultValue(
CMapArray.FromArrays(_offsets.sp_array,
_keys.sp_array,
_items.sp_array, cpool, c_mask))
cdef Array result = pyarrow_wrap_array(out)
result.validate()
return result
@property
def keys(self):
"""Flattened array of keys across all maps in array"""
return pyarrow_wrap_array((<CMapArray*> self.ap).keys())
@property
def items(self):
"""Flattened array of items across all maps in array"""
return pyarrow_wrap_array((<CMapArray*> self.ap).items())
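As the ARROW-6904 commit message above describes, map values can also be built directly from Python sequences of key/value tuples (or plain dicts); a minimal sketch:
```
import pyarrow as pa

map_type = pa.map_(pa.string(), pa.int64())

# one row of (key, value) tuples per map entry
arr = pa.array([[("a", 1), ("b", 2)], [("c", 3)]], type=map_type)

# dicts per row work as well
arr2 = pa.array([{"a": 1, "b": 2}, {"c": 3}], type=map_type)

# as_py()/to_pylist() return each map as a list of (key, value) tuples
assert arr.to_pylist() == [[("a", 1), ("b", 2)], [("c", 3)]]
```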
cdef class FixedSizeListArray(BaseListArray):
"""
Concrete class for Arrow arrays of a fixed size list data type.
"""
@staticmethod
def from_arrays(values, list_size=None, DataType type=None, mask=None):
"""
Construct FixedSizeListArray from array of values and a list length.
Parameters
----------
values : Array (any type)
list_size : int
The fixed length of the lists.
type : DataType, optional
If not specified, a default ListType with the values' type and
`list_size` length is used.
mask : Array (boolean type), optional
Indicate which values are null (True) or not null (False).
Returns
-------
FixedSizeListArray
Examples
--------
Create from a values array and a list size:
>>> import pyarrow as pa
>>> values = pa.array([1, 2, 3, 4])
>>> arr = pa.FixedSizeListArray.from_arrays(values, 2)
>>> arr
<pyarrow.lib.FixedSizeListArray object at ...>
[
[
1,
2
],
[
3,
4
]
]
Or create from a values array, list size and matching type:
>>> typ = pa.list_(pa.field("values", pa.int64()), 2)
>>> arr = pa.FixedSizeListArray.from_arrays(values,type=typ)
>>> arr
<pyarrow.lib.FixedSizeListArray object at ...>
[
[
1,
2
],
[
3,
4
]
]
"""
cdef:
Array _values
int32_t _list_size
CResult[shared_ptr[CArray]] c_result
_values = asarray(values)
c_mask = c_mask_inverted_from_obj(mask, None)
if type is not None:
if list_size is not None:
raise ValueError("Cannot specify both list_size and type")
with nogil:
c_result = CFixedSizeListArray.FromArraysAndType(
_values.sp_array, type.sp_type, c_mask)
else:
if list_size is None:
raise ValueError("Should specify one of list_size and type")
_list_size = <int32_t>list_size
with nogil:
c_result = CFixedSizeListArray.FromArrays(
_values.sp_array, _list_size, c_mask)
cdef Array result = pyarrow_wrap_array(GetResultValue(c_result))
result.validate()
return result
@property
def values(self):
"""
Return the underlying array of values which backs the
FixedSizeListArray ignoring the array's offset.
Note even null elements are included.
Compare with :meth:`flatten`, which returns only the non-null
sub-list values.
Returns
-------
values : Array
See Also
--------
FixedSizeListArray.flatten : ...
Examples
--------
>>> import pyarrow as pa
>>> array = pa.array(
... [[1, 2], None, [3, None]],
... type=pa.list_(pa.int32(), 2)
... )
>>> array.values
<pyarrow.lib.Int32Array object at ...>
[
1,
2,
null,
null,
3,
null
]
"""
cdef CFixedSizeListArray* arr = <CFixedSizeListArray*> self.ap
return pyarrow_wrap_array(arr.values())
cdef class UnionArray(Array):
"""
Concrete class for Arrow arrays of a Union data type.
"""
def child(self, int pos):
"""
DEPRECATED, use field() instead.
Parameters
----------
pos : int
The physical index of the union child field (not its type code).
Returns
-------
field : pyarrow.Field
The given child field.
"""
import warnings
warnings.warn("child is deprecated, use field", FutureWarning)
return self.field(pos)
def field(self, int pos):
"""
Return the given child field as an individual array.
For sparse unions, the returned array has its offset, length,
and null count adjusted.
For dense unions, the returned array is unchanged.
Parameters
----------
pos : int
The physical index of the union child field (not its type code).
Returns
-------
field : Array
The given child field.
"""
cdef shared_ptr[CArray] result
result = (<CUnionArray*> self.ap).field(pos)
if result != NULL:
return pyarrow_wrap_array(result)
raise KeyError(f"UnionArray does not have child {pos}")
@property
def type_codes(self):
"""Get the type codes array."""
buf = pyarrow_wrap_buffer((<CUnionArray*> self.ap).type_codes())
return Array.from_buffers(int8(), len(self), [None, buf])
@property
def offsets(self):
"""
Get the value offsets array (dense arrays only).
Does not account for any slice offset.
"""
if self.type.mode != "dense":
raise ArrowTypeError("Can only get value offsets for dense arrays")
cdef CDenseUnionArray* dense = <CDenseUnionArray*> self.ap
buf = pyarrow_wrap_buffer(dense.value_offsets())
return Array.from_buffers(int32(), len(self), [None, buf])
@staticmethod
def from_dense(Array types, Array value_offsets, list children,
list field_names=None, list type_codes=None):
"""
Construct dense UnionArray from arrays of int8 types, int32 offsets and
children arrays
Parameters
----------
types : Array (int8 type)
value_offsets : Array (int32 type)
children : list
field_names : list
type_codes : list
Returns
-------
union_array : UnionArray
"""
cdef:
shared_ptr[CArray] out
vector[shared_ptr[CArray]] c
Array child
vector[c_string] c_field_names
vector[int8_t] c_type_codes
for child in children:
c.push_back(child.sp_array)
if field_names is not None:
for x in field_names:
c_field_names.push_back(tobytes(x))
if type_codes is not None:
for x in type_codes:
c_type_codes.push_back(x)
with nogil:
out = GetResultValue(CDenseUnionArray.Make(
deref(types.ap), deref(value_offsets.ap), c, c_field_names,
c_type_codes))
cdef Array result = pyarrow_wrap_array(out)
result.validate()
return result
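A small sketch of constructing a dense union from its component arrays (the values and field names here are hypothetical):
```
import pyarrow as pa

types = pa.array([0, 1, 0], type=pa.int8())      # which child each slot draws from
offsets = pa.array([0, 0, 1], type=pa.int32())   # index into that child array
ints = pa.array([10, 20], type=pa.int64())
strings = pa.array(["hello"])

union = pa.UnionArray.from_dense(types, offsets, [ints, strings],
                                 field_names=["i", "s"])
assert union.to_pylist() == [10, "hello", 20]
```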
@staticmethod
def from_sparse(Array types, list children, list field_names=None,
list type_codes=None):
"""
Construct sparse UnionArray from arrays of int8 types and children
arrays
Parameters
----------
types : Array (int8 type)
children : list
field_names : list
type_codes : list
Returns
-------
union_array : UnionArray
"""
cdef:
shared_ptr[CArray] out
vector[shared_ptr[CArray]] c
Array child
vector[c_string] c_field_names
vector[int8_t] c_type_codes
for child in children:
c.push_back(child.sp_array)
if field_names is not None:
for x in field_names:
c_field_names.push_back(tobytes(x))
if type_codes is not None:
for x in type_codes:
c_type_codes.push_back(x)
with nogil:
out = GetResultValue(CSparseUnionArray.Make(
deref(types.ap), c, c_field_names, c_type_codes))
cdef Array result = pyarrow_wrap_array(out)
result.validate()
return result
cdef class StringArray(Array):
"""
Concrete class for Arrow arrays of string (or utf8) data type.
"""
@staticmethod
def from_buffers(int length, Buffer value_offsets, Buffer data,
Buffer null_bitmap=None, int null_count=-1,
int offset=0):
"""
Construct a StringArray from value_offsets and data buffers.
If there are nulls in the data, a null_bitmap and the matching
null_count must also be passed.
Parameters
----------
length : int
value_offsets : Buffer
data : Buffer
null_bitmap : Buffer, optional
null_count : int, default -1
offset : int, default 0
Returns
-------
string_array : StringArray
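Examples
--------
A small illustrative example using buffers built from NumPy data
(little-endian int32 offsets assumed):
>>> import pyarrow as pa
>>> import numpy as np
>>> value_offsets = pa.py_buffer(np.array([0, 2, 5], dtype=np.int32))
>>> data = pa.py_buffer(b"abcde")
>>> pa.StringArray.from_buffers(2, value_offsets, data).to_pylist()
['ab', 'cde']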
"""
return Array.from_buffers(utf8(), length,
[null_bitmap, value_offsets, data],
null_count, offset)
cdef class LargeStringArray(Array):
"""
Concrete class for Arrow arrays of large string (or large_utf8) data type.
"""
@staticmethod
def from_buffers(int length, Buffer value_offsets, Buffer data,
Buffer null_bitmap=None, int null_count=-1,
int offset=0):
"""
Construct a LargeStringArray from value_offsets and data buffers.
If there are nulls in the data, a null_bitmap and the matching
null_count must also be passed.
Parameters
----------
length : int
value_offsets : Buffer
data : Buffer
null_bitmap : Buffer, optional
null_count : int, default -1
offset : int, default 0
Returns
-------
string_array : LargeStringArray
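Examples
--------
Analogous to :meth:`StringArray.from_buffers`, but the offsets buffer
must contain int64 values:
>>> import pyarrow as pa
>>> import numpy as np
>>> value_offsets = pa.py_buffer(np.array([0, 2, 5], dtype=np.int64))
>>> data = pa.py_buffer(b"abcde")
>>> pa.LargeStringArray.from_buffers(2, value_offsets, data).to_pylist()
['ab', 'cde']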
"""
return Array.from_buffers(large_utf8(), length,
[null_bitmap, value_offsets, data],
null_count, offset)
cdef class StringViewArray(Array):
"""
Concrete class for Arrow arrays of string (or utf8) view data type.
"""
cdef class BinaryArray(Array):
"""
Concrete class for Arrow arrays of variable-sized binary data type.
"""
@property
def total_values_length(self):
"""
The number of bytes from beginning to end of the data buffer addressed
by the offsets of this BinaryArray.
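Examples
--------
A small illustrative example:
>>> import pyarrow as pa
>>> pa.array([b"ab", None, b"cdef"]).total_values_length
6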
"""
return (<CBinaryArray*> self.ap).total_values_length()
cdef class LargeBinaryArray(Array):
"""
Concrete class for Arrow arrays of large variable-sized binary data type.
"""
@property
def total_values_length(self):
"""
The number of bytes from beginning to end of the data buffer addressed
by the offsets of this LargeBinaryArray.
"""
return (<CLargeBinaryArray*> self.ap).total_values_length()
cdef class BinaryViewArray(Array):
"""
Concrete class for Arrow arrays of variable-sized binary view data type.
"""
cdef class DictionaryArray(Array):
"""
Concrete class for dictionary-encoded Arrow arrays.
"""
def dictionary_encode(self):
"""
Return the array itself, as it is already dictionary-encoded.
"""
return self
def dictionary_decode(self):
"""
Decodes the DictionaryArray to an Array.
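Examples
--------
A small illustrative example:
>>> import pyarrow as pa
>>> encoded = pa.array(["a", "b", "a"]).dictionary_encode()
>>> encoded.dictionary_decode().to_pylist()
['a', 'b', 'a']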
"""
return self.dictionary.take(self.indices)
@property
def dictionary(self):
"""
The dictionary of values referenced by this array's indices.
"""
cdef CDictionaryArray* darr = <CDictionaryArray*>(self.ap)
if self._dictionary is None:
self._dictionary = pyarrow_wrap_array(darr.dictionary())
return self._dictionary
@property
def indices(self):
"""
The indices referencing the dictionary values.
"""
cdef CDictionaryArray* darr = <CDictionaryArray*>(self.ap)
if self._indices is None:
self._indices = pyarrow_wrap_array(darr.indices())
return self._indices
@staticmethod
def from_buffers(DataType type, int64_t length, buffers, Array dictionary,
int64_t null_count=-1, int64_t offset=0):
"""
Construct a DictionaryArray from buffers.
Parameters
----------
type : pyarrow.DataType
length : int
The number of values in the array.
buffers : List[Buffer | None]
The buffers backing the indices array.
dictionary : pyarrow.Array, ndarray or pandas.Series
The array of values referenced by the indices.
null_count : int, default -1
The number of null entries in the indices array. Negative value means that
the null count is not known.
offset : int, default 0
The array's logical offset (in values, not in bytes) from the
start of each buffer.
Returns
-------
dict_array : DictionaryArray
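Examples
--------
A small illustrative example that reuses the buffers of an existing
indices array:
>>> import pyarrow as pa
>>> indices = pa.array([0, 1, 0], type=pa.int8())
>>> dictionary = pa.array(["a", "b"])
>>> dict_type = pa.dictionary(pa.int8(), pa.utf8())
>>> arr = pa.DictionaryArray.from_buffers(
... dict_type, 3, indices.buffers(), dictionary)
>>> arr.to_pylist()
['a', 'b', 'a']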
"""
cdef:
vector[shared_ptr[CBuffer]] c_buffers
shared_ptr[CDataType] c_type
shared_ptr[CArrayData] c_data
shared_ptr[CArray] c_result
for buf in buffers:
c_buffers.push_back(pyarrow_unwrap_buffer(buf))
c_type = pyarrow_unwrap_data_type(type)
with nogil:
c_data = CArrayData.Make(
c_type, length, c_buffers, null_count, offset)
c_data.get().dictionary = dictionary.sp_array.get().data()
c_result.reset(new CDictionaryArray(c_data))
cdef Array result = pyarrow_wrap_array(c_result)
result.validate()
return result
@staticmethod
def from_arrays(indices, dictionary, mask=None, bint ordered=False,
bint from_pandas=False, bint safe=True,
MemoryPool memory_pool=None):
"""
Construct a DictionaryArray from indices and values.
Parameters
----------
indices : pyarrow.Array, numpy.ndarray or pandas.Series, int type
Non-negative integers referencing the dictionary values by zero
based index.
dictionary : pyarrow.Array, ndarray or pandas.Series
The array of values referenced by the indices.
mask : ndarray or pandas.Series, bool type
True values indicate that indices are actually null.
ordered : bool, default False
Set to True if the category values are ordered.
from_pandas : bool, default False
If True, the indices should be treated as though they originated in
a pandas.Categorical (null encoded as -1).
safe : bool, default True
If True, check that the dictionary indices are in range.
memory_pool : MemoryPool, default None
For memory allocations, if required, otherwise uses default pool.
Returns
-------
dict_array : DictionaryArray
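Examples
--------
A small illustrative example:
>>> import pyarrow as pa
>>> indices = pa.array([0, 1, 0, None])
>>> dictionary = pa.array(["a", "b"])
>>> pa.DictionaryArray.from_arrays(indices, dictionary).to_pylist()
['a', 'b', 'a', None]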
"""
cdef:
Array _indices, _dictionary
shared_ptr[CDataType] c_type
shared_ptr[CArray] c_result
if isinstance(indices, Array):
if mask is not None:
raise NotImplementedError(
"mask not implemented with Arrow array inputs yet")
_indices = indices
else:
if from_pandas:
_indices = _codes_to_indices(indices, mask, None, memory_pool)
else:
_indices = array(indices, mask=mask, memory_pool=memory_pool)
if isinstance(dictionary, Array):
_dictionary = dictionary
else:
_dictionary = array(dictionary, memory_pool=memory_pool)
if not isinstance(_indices, IntegerArray):
raise ValueError('Indices must be integer type')
cdef c_bool c_ordered = ordered
c_type.reset(new CDictionaryType(_indices.type.sp_type,
_dictionary.sp_array.get().type(),
c_ordered))
if safe:
with nogil:
c_result = GetResultValue(
CDictionaryArray.FromArrays(c_type, _indices.sp_array,
_dictionary.sp_array))
else:
c_result.reset(new CDictionaryArray(c_type, _indices.sp_array,
_dictionary.sp_array))
cdef Array result = pyarrow_wrap_array(c_result)
result.validate()
return result
cdef class StructArray(Array):
"""
Concrete class for Arrow arrays of a struct data type.
"""
def field(self, index):
"""
Retrieve the child array belonging to the given field.
Parameters
----------
index : Union[int, str]
Index / position or name of the field.
Returns
-------
result : Array
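Examples
--------
A small illustrative example, selecting by name and by position:
>>> import pyarrow as pa
>>> arr = pa.array([{"x": 1, "y": "a"}, {"x": 2, "y": "b"}])
>>> arr.field("x").to_pylist()
[1, 2]
>>> arr.field(1).to_pylist()
['a', 'b']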
"""
cdef:
CStructArray* arr = <CStructArray*> self.ap
shared_ptr[CArray] child
if isinstance(index, (bytes, str)):
child = arr.GetFieldByName(tobytes(index))
if child == nullptr:
raise KeyError(index)
elif isinstance(index, int):
child = arr.field(
<int>_normalize_index(index, self.ap.num_fields()))
else:
raise TypeError('Expected integer or string index')
return pyarrow_wrap_array(child)
def _flattened_field(self, index, MemoryPool memory_pool=None):
"""
Retrieve the child array belonging to the given field,
accounting for the parent array's null bitmap.
Parameters
----------
index : Union[int, str]
Index / position or name of the field.
memory_pool : MemoryPool, default None
For memory allocations, if required, otherwise use default pool.
Returns
-------
result : Array
"""
cdef:
CStructArray* arr = <CStructArray*> self.ap
shared_ptr[CArray] child
CMemoryPool* pool = maybe_unbox_memory_pool(memory_pool)
if isinstance(index, (bytes, str)):
int_index = self.type.get_field_index(index)
if int_index < 0:
raise KeyError(index)
elif isinstance(index, int):
int_index = _normalize_index(index, self.ap.num_fields())
else:
raise TypeError('Expected integer or string index')
child = GetResultValue(arr.GetFlattenedField(int_index, pool))
return pyarrow_wrap_array(child)
def flatten(self, MemoryPool memory_pool=None):
"""
Return one individual array for each field in the struct.
Parameters
----------
memory_pool : MemoryPool, default None
For memory allocations, if required, otherwise use default pool.
Returns
-------
result : List[Array]
"""
cdef:
vector[shared_ptr[CArray]] arrays
CMemoryPool* pool = maybe_unbox_memory_pool(memory_pool)
CStructArray* sarr = <CStructArray*> self.ap
with nogil:
arrays = GetResultValue(sarr.Flatten(pool))
return [pyarrow_wrap_array(arr) for arr in arrays]
@staticmethod
def from_arrays(arrays, names=None, fields=None, mask=None,
memory_pool=None, type=None):
"""
Construct StructArray from collection of arrays representing
each field in the struct.
Either field names, field instances or a struct type must be passed.
Parameters
----------
arrays : sequence of Array
names : List[str] (optional)
Field names for each struct child.
fields : List[Field] (optional)
Field instances for each struct child.
mask : pyarrow.Array[bool] (optional)
Indicate which values are null (True) or not null (False).
memory_pool : MemoryPool (optional)
For memory allocations, if required, otherwise uses default pool.
type : pyarrow.StructType (optional)
Struct type for name and type of each child.
Returns
-------
result : StructArray
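Examples
--------
A small illustrative example using field names:
>>> import pyarrow as pa
>>> xs = pa.array([1, 2, 3])
>>> ys = pa.array(["a", "b", "c"])
>>> pa.StructArray.from_arrays([xs, ys], names=["x", "y"]).to_pylist()
[{'x': 1, 'y': 'a'}, {'x': 2, 'y': 'b'}, {'x': 3, 'y': 'c'}]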
"""
cdef:
shared_ptr[CArray] c_array
shared_ptr[CBuffer] c_mask
vector[shared_ptr[CArray]] c_arrays
vector[c_string] c_names
vector[shared_ptr[CField]] c_fields
CResult[shared_ptr[CArray]] c_result
ssize_t num_arrays
ssize_t length
ssize_t i
Field py_field
DataType struct_type
if fields is not None and type is not None:
raise ValueError('Must pass either fields or type, not both')
if type is not None:
fields = []
for field in type:
fields.append(field)
if names is None and fields is None:
raise ValueError('Must pass either names or fields')
if names is not None and fields is not None:
raise ValueError('Must pass either names or fields, not both')
c_mask = c_mask_inverted_from_obj(mask, memory_pool)
arrays = [asarray(x) for x in arrays]
for arr in arrays:
c_array = pyarrow_unwrap_array(arr)
if c_array == nullptr:
raise TypeError(f"Expected Array, got {arr.__class__}")
c_arrays.push_back(c_array)
if names is not None:
for name in names:
c_names.push_back(tobytes(name))
else:
for item in fields:
if isinstance(item, tuple):
py_field = field(*item)
else:
py_field = item
c_fields.push_back(py_field.sp_field)
if (c_arrays.size() == 0 and c_names.size() == 0 and
c_fields.size() == 0):
# The C++ side doesn't allow this
if mask is None:
return array([], struct([]))
else:
return array([{}] * len(mask), struct([]), mask=mask)
if names is not None:
# XXX Cannot pass "nullptr" for a shared_ptr<T> argument:
# https://github.com/cython/cython/issues/3020
c_result = CStructArray.MakeFromFieldNames(
c_arrays, c_names, c_mask, -1, 0)
else:
c_result = CStructArray.MakeFromFields(
c_arrays, c_fields, c_mask, -1, 0)
cdef Array result = pyarrow_wrap_array(GetResultValue(c_result))
result.validate()
return result
def sort(self, order="ascending", by=None, **kwargs):
"""
Sort the StructArray
Parameters
----------
order : str, default "ascending"
Which order to sort values in.
Accepted values are "ascending", "descending".
by : str or None, default None
Name of the field to sort by, or None to sort
by the whole array.
**kwargs : dict, optional
Additional sorting options.
As allowed by :class:`SortOptions`
Returns
-------
result : StructArray
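Examples
--------
A small illustrative example, sorting by a single field:
>>> import pyarrow as pa
>>> arr = pa.array([{"x": 3, "y": "c"}, {"x": 1, "y": "a"}, {"x": 2, "y": "b"}])
>>> arr.sort(by="x").to_pylist()
[{'x': 1, 'y': 'a'}, {'x': 2, 'y': 'b'}, {'x': 3, 'y': 'c'}]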
"""
if by is not None:
tosort, sort_keys = self._flattened_field(by), [("", order)]
else:
tosort, sort_keys = self, [(field.name, order) for field in self.type]
indices = _pc().sort_indices(
tosort, options=_pc().SortOptions(sort_keys=sort_keys, **kwargs)
)
return self.take(indices)
cdef class RunEndEncodedArray(Array):
"""
Concrete class for Arrow run-end encoded arrays.
"""
@staticmethod
def _from_arrays(type, allow_none_for_type, logical_length, run_ends, values, logical_offset):
cdef:
int64_t _logical_length
Array _run_ends
Array _values
int64_t _logical_offset
shared_ptr[CDataType] c_type
shared_ptr[CRunEndEncodedArray] ree_array
_logical_length = <int64_t>logical_length
_logical_offset = <int64_t>logical_offset
type = ensure_type(type, allow_none=allow_none_for_type)
if type is not None:
_run_ends = asarray(run_ends, type=type.run_end_type)
_values = asarray(values, type=type.value_type)
c_type = pyarrow_unwrap_data_type(type)
with nogil:
ree_array = GetResultValue(CRunEndEncodedArray.Make(
c_type, _logical_length, _run_ends.sp_array, _values.sp_array, _logical_offset))
else:
_run_ends = asarray(run_ends)
_values = asarray(values)
with nogil:
ree_array = GetResultValue(CRunEndEncodedArray.MakeFromArrays(
_logical_length, _run_ends.sp_array, _values.sp_array, _logical_offset))
cdef Array result = pyarrow_wrap_array(<shared_ptr[CArray]>ree_array)
result.validate(full=True)
return result
@staticmethod
def from_arrays(run_ends, values, type=None):
"""
Construct RunEndEncodedArray from run_ends and values arrays.
Parameters
----------
run_ends : Array (int16, int32, or int64 type)
The run_ends array.
values : Array (any type)
The values array.
type : pyarrow.DataType, optional
The run_end_encoded(run_end_type, value_type) array type.
Returns
-------
RunEndEncodedArray
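Examples
--------
A small illustrative example; the logical length is taken from the last
run end:
>>> import pyarrow as pa
>>> ree = pa.RunEndEncodedArray.from_arrays([3, 5], ["a", "b"])
>>> len(ree)
5
>>> ree.run_ends.to_pylist()
[3, 5]
>>> ree.values.to_pylist()
['a', 'b']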
"""
logical_length = scalar(run_ends[-1]).as_py() if len(run_ends) > 0 else 0
return RunEndEncodedArray._from_arrays(type, True, logical_length,
run_ends, values, 0)
@staticmethod
def from_buffers(DataType type, length, buffers, null_count=-1, offset=0,
children=None):
"""
Construct a RunEndEncodedArray from all the parameters that make up an
Array.
RunEndEncodedArrays do not have buffers, only children arrays, but this
implementation is needed to satisfy the Array interface.
Parameters
----------
type : DataType
The run_end_encoded(run_end_type, value_type) type.
length : int
The logical length of the run-end encoded array. Expected to match
the last value of the run_ends array (children[0]) minus the offset.
buffers : List[Buffer]
Empty List or [None].
null_count : int, default -1
The number of null entries in the array. Run-end encoded arrays
are specified to not have valid bits and null_count always equals 0.
offset : int, default 0
The array's logical offset (in values, not in bytes) from the
start of each buffer.
children : List[Array]
Nested type children containing the run_ends and values arrays.
Returns
-------
RunEndEncodedArray
"""
children = children or []
if type.num_fields != len(children):
raise ValueError("RunEndEncodedType's expected number of children "
f"({type.num_fields}) did not match the passed number "
f"({len(children)})")
# buffers are validated as if we needed to pass them to C++, but
# _from_arrays will take care of filling in the expected
# buffers array containing a single NULL buffer on the C++ side
if len(buffers) == 0:
buffers = [None]
if buffers[0] is not None:
raise ValueError("RunEndEncodedType expects None as validity "
"bitmap, buffers[0] is not None")
if type.num_buffers != len(buffers):
raise ValueError("RunEndEncodedType's expected number of buffers "
f"({type.num_buffers}) did not match the passed number "
f"({len(buffers)}).")
# null_count is also validated as if we needed it
if null_count != -1 and null_count != 0:
raise ValueError("RunEndEncodedType's expected null_count (0) "
f"did not match passed number ({null_count})")
return RunEndEncodedArray._from_arrays(type, False, length, children[0],
children[1], offset)
@property
def run_ends(self):
"""
An array holding the logical indexes of each run-end.
The physical offset to the array is applied.
"""
cdef CRunEndEncodedArray* ree_array = <CRunEndEncodedArray*>(self.ap)
return pyarrow_wrap_array(ree_array.run_ends())
@property
def values(self):
"""
An array holding the values of each run.
The physical offset to the array is applied.
"""
cdef CRunEndEncodedArray* ree_array = <CRunEndEncodedArray*>(self.ap)
return pyarrow_wrap_array(ree_array.values())
def find_physical_offset(self):
"""
Find the physical offset of this REE array.
This is the offset of the run that contains the value of the first
logical element of this array considering its offset.
This function uses binary search, so it has an O(log N) cost.
"""
cdef CRunEndEncodedArray* ree_array = <CRunEndEncodedArray*>(self.ap)
return ree_array.FindPhysicalOffset()
def find_physical_length(self):
"""
Find the physical length of this REE array.
The physical length of an REE is the number of physical values (and
run-ends) necessary to represent the logical range of values from offset
to length.
This function uses binary search, so it has an O(log N) cost.
"""
cdef CRunEndEncodedArray* ree_array = <CRunEndEncodedArray*>(self.ap)
return ree_array.FindPhysicalLength()
cdef class ExtensionArray(Array):
"""
Concrete class for Arrow extension arrays.
"""
@property
def storage(self):
cdef:
CExtensionArray* ext_array = <CExtensionArray*>(self.ap)
return pyarrow_wrap_array(ext_array.storage())
@staticmethod
def from_storage(BaseExtensionType typ, Array storage):
"""
Construct ExtensionArray from type and storage array.
Parameters
----------
typ : DataType
The extension type for the result array.
storage : Array
The underlying storage for the result array.
Returns
-------
ext_array : ExtensionArray
"""
cdef:
shared_ptr[CExtensionArray] ext_array
if storage.type != typ.storage_type:
raise TypeError(f"Incompatible storage type {storage.type} "
f"for extension type {typ}")
ext_array = make_shared[CExtensionArray](typ.sp_type, storage.sp_array)
cdef Array result = pyarrow_wrap_array(<shared_ptr[CArray]> ext_array)
result.validate()
return result
class JsonArray(ExtensionArray):
"""
Concrete class for Arrow arrays of JSON data type.
This does not guarantee that the JSON data
is actually valid JSON.
Examples
--------
Define the extension type for JSON array
>>> import pyarrow as pa
>>> json_type = pa.json_(pa.large_utf8())
Create an extension array
>>> arr = [None, '{ "id":30, "values":["a", "b"] }']
>>> storage = pa.array(arr, pa.large_utf8())
>>> pa.ExtensionArray.from_storage(json_type, storage)
<pyarrow.lib.JsonArray object at ...>
[
null,
"{ "id":30, "values":["a", "b"] }"
]
"""
class UuidArray(ExtensionArray):
"""
Concrete class for Arrow arrays of UUID data type.
"""
cdef class FixedShapeTensorArray(ExtensionArray):
"""
Concrete class for fixed shape tensor extension arrays.
Examples
--------
Define the extension type for tensor array
>>> import pyarrow as pa
>>> tensor_type = pa.fixed_shape_tensor(pa.int32(), [2, 2])
Create an extension array
>>> arr = [[1, 2, 3, 4], [10, 20, 30, 40], [100, 200, 300, 400]]
>>> storage = pa.array(arr, pa.list_(pa.int32(), 4))
>>> pa.ExtensionArray.from_storage(tensor_type, storage)
<pyarrow.lib.FixedShapeTensorArray object at ...>
[
[
1,
2,
3,
4
],
[
10,
20,
30,
40
],
[
100,
200,
300,
400
]
]
"""
def to_numpy_ndarray(self):
"""
Convert fixed shape tensor extension array to a multi-dimensional numpy.ndarray.
The resulting ndarray will have (ndim + 1) dimensions.
The size of the first dimension will be the length of the fixed shape tensor array
and the rest of the dimensions will match the permuted shape of the fixed
shape tensor.
The conversion is zero-copy.
Returns
-------
numpy.ndarray
Ndarray representing tensors in the fixed shape tensor array concatenated
along the first dimension.
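Examples
--------
A small illustrative round trip from and back to NumPy:
>>> import pyarrow as pa
>>> import numpy as np
>>> data = np.array(
... [[[1, 2], [3, 4]], [[5, 6], [7, 8]]], dtype=np.int32)
>>> tensor_array = pa.FixedShapeTensorArray.from_numpy_ndarray(data)
>>> tensor_array.to_numpy_ndarray().tolist()
[[[1, 2], [3, 4]], [[5, 6], [7, 8]]]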
"""
return self.to_tensor().to_numpy()
def to_tensor(self):
"""
Convert fixed shape tensor extension array to a pyarrow.Tensor.
The resulting Tensor will have (ndim + 1) dimensions.
The size of the first dimension will be the length of the fixed shape tensor array
and the rest of the dimensions will match the permuted shape of the fixed
shape tensor.
The conversion is zero-copy.
Returns
-------
pyarrow.Tensor
Tensor representing tensors in the fixed shape tensor array concatenated
along the first dimension.
"""
cdef:
CFixedShapeTensorArray* ext_array = <CFixedShapeTensorArray*>(self.ap)
CResult[shared_ptr[CTensor]] ctensor
with nogil:
ctensor = ext_array.ToTensor()
return pyarrow_wrap_tensor(GetResultValue(ctensor))
@staticmethod
def from_numpy_ndarray(obj, dim_names=None):
"""
Convert numpy tensors (ndarrays) to a fixed shape tensor extension array.
The first dimension of ndarray will become the length of the fixed
shape tensor array.
If the input array data is not contiguous, a copy will be made.
Parameters
----------
obj : numpy.ndarray
dim_names : tuple or list of strings, default None
Explicit names for the tensor dimensions.
Examples
--------
>>> import pyarrow as pa
>>> import numpy as np
>>> arr = np.array(
... [[[1, 2, 3], [4, 5, 6]], [[1, 2, 3], [4, 5, 6]]],
... dtype=np.float32)
>>> pa.FixedShapeTensorArray.from_numpy_ndarray(arr)
<pyarrow.lib.FixedShapeTensorArray object at ...>
[
[
1,
2,
3,
4,
5,
6
],
[
1,
2,
3,
4,
5,
6
]
]
"""
if len(obj.shape) < 2:
raise ValueError(
"Cannot convert 1D array or scalar to fixed shape tensor array")
if np.prod(obj.shape) == 0:
raise ValueError("Expected a non-empty ndarray")
if dim_names is not None:
if not isinstance(dim_names, Sequence):
raise TypeError("dim_names must be a tuple or list")
if len(dim_names) != len(obj.shape[1:]):
raise ValueError(
(f"The length of dim_names ({len(dim_names)}) does not match"
f"the number of tensor dimensions ({len(obj.shape[1:])})."
)
)
if not all(isinstance(name, str) for name in dim_names):
raise TypeError("Each element of dim_names must be a string")
permutation = (-np.array(obj.strides)).argsort(kind='stable')
if permutation[0] != 0:
raise ValueError('First stride needs to be largest to ensure that '
'individual tensor data is contiguous in memory.')
arrow_type = from_numpy_dtype(obj.dtype)
shape = np.take(obj.shape, permutation)
values = np.ravel(obj, order="K")
return ExtensionArray.from_storage(
fixed_shape_tensor(arrow_type, shape[1:],
dim_names=dim_names,
permutation=permutation[1:] - 1),
FixedSizeListArray.from_arrays(values, shape[1:].prod())
)
cdef class OpaqueArray(ExtensionArray):
"""
Concrete class for opaque extension arrays.
Examples
--------
Define the extension type for an opaque array
>>> import pyarrow as pa
>>> opaque_type = pa.opaque(
... pa.binary(),
... type_name="geometry",
... vendor_name="postgis",
... )
Create an extension array
>>> arr = [None, b"data"]
>>> storage = pa.array(arr, pa.binary())
>>> pa.ExtensionArray.from_storage(opaque_type, storage)
<pyarrow.lib.OpaqueArray object at ...>
[
null,
64617461
]
"""
cdef class Bool8Array(ExtensionArray):
"""
Concrete class for bool8 extension arrays.
Examples
--------
Define the extension type for a bool8 array
>>> import pyarrow as pa
>>> bool8_type = pa.bool8()
Create an extension array
>>> arr = [-1, 0, 1, 2, None]
>>> storage = pa.array(arr, pa.int8())
>>> pa.ExtensionArray.from_storage(bool8_type, storage)
<pyarrow.lib.Bool8Array object at ...>
[
-1,
0,
1,
2,
null
]
"""
def to_numpy(self, zero_copy_only=True, writable=False):
"""
Return a NumPy bool view or copy of this array.
By default, tries to return a view of this array. This is only
supported for arrays without any nulls.
Parameters
----------
zero_copy_only : bool, default True
If True, an exception will be raised if the conversion to a numpy
array would require copying the underlying data (e.g. in presence
of nulls).
writable : bool, default False
For numpy arrays created with zero copy (view on the Arrow data),
the resulting array is not writable (Arrow data is immutable).
By setting this to True, a copy of the array is made to ensure
it is writable.
Returns
-------
array : numpy.ndarray
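Examples
--------
A small illustrative example (the zero-copy view requires an array
without nulls):
>>> import pyarrow as pa
>>> import numpy as np
>>> arr = pa.Bool8Array.from_numpy(np.array([True, False, True]))
>>> arr.to_numpy().tolist()
[True, False, True]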
"""
if not writable:
try:
return self.storage.to_numpy().view(np.bool_)
except ArrowInvalid as e:
if zero_copy_only:
raise e
return _pc().not_equal(self.storage, 0).to_numpy(zero_copy_only=zero_copy_only, writable=writable)
@staticmethod
def from_storage(Int8Array storage):
"""
Construct Bool8Array from Int8Array storage.
Parameters
----------
storage : Int8Array
The underlying storage for the result array.
Returns
-------
bool8_array : Bool8Array
"""
return ExtensionArray.from_storage(bool8(), storage)
@staticmethod
def from_numpy(obj):
"""
Convert numpy array to a bool8 extension array without making a copy.
The input array must be 1-dimensional, with either ``bool_`` or ``int8`` dtype.
Parameters
----------
obj : numpy.ndarray
Returns
-------
bool8_array : Bool8Array
Examples
--------
>>> import pyarrow as pa
>>> import numpy as np
>>> arr = np.array([True, False, True], dtype=np.bool_)
>>> pa.Bool8Array.from_numpy(arr)
<pyarrow.lib.Bool8Array object at ...>
[
1,
0,
1
]
"""
if obj.ndim != 1:
raise ValueError(f"Cannot convert {obj.ndim}-D array to bool8 array")
if obj.dtype not in [np.bool_, np.int8]:
raise TypeError(f"Array dtype {obj.dtype} incompatible with bool8 storage")
storage_arr = array(obj.view(np.int8), type=int8())
return Bool8Array.from_storage(storage_arr)
cdef dict _array_classes = {
_Type_NA: NullArray,
_Type_BOOL: BooleanArray,
_Type_UINT8: UInt8Array,
_Type_UINT16: UInt16Array,
_Type_UINT32: UInt32Array,
_Type_UINT64: UInt64Array,
_Type_INT8: Int8Array,
_Type_INT16: Int16Array,
_Type_INT32: Int32Array,
_Type_INT64: Int64Array,
_Type_DATE32: Date32Array,
_Type_DATE64: Date64Array,
_Type_TIMESTAMP: TimestampArray,
_Type_TIME32: Time32Array,
_Type_TIME64: Time64Array,
_Type_DURATION: DurationArray,
_Type_INTERVAL_MONTH_DAY_NANO: MonthDayNanoIntervalArray,
_Type_HALF_FLOAT: HalfFloatArray,
_Type_FLOAT: FloatArray,
_Type_DOUBLE: DoubleArray,
_Type_LIST: ListArray,
_Type_LARGE_LIST: LargeListArray,
_Type_LIST_VIEW: ListViewArray,
_Type_LARGE_LIST_VIEW: LargeListViewArray,
_Type_MAP: MapArray,
_Type_FIXED_SIZE_LIST: FixedSizeListArray,
_Type_SPARSE_UNION: UnionArray,
_Type_DENSE_UNION: UnionArray,
_Type_BINARY: BinaryArray,
_Type_STRING: StringArray,
_Type_LARGE_BINARY: LargeBinaryArray,
_Type_LARGE_STRING: LargeStringArray,
_Type_BINARY_VIEW: BinaryViewArray,
_Type_STRING_VIEW: StringViewArray,
_Type_DICTIONARY: DictionaryArray,
_Type_FIXED_SIZE_BINARY: FixedSizeBinaryArray,
_Type_DECIMAL32: Decimal32Array,
_Type_DECIMAL64: Decimal64Array,
_Type_DECIMAL128: Decimal128Array,
_Type_DECIMAL256: Decimal256Array,
_Type_STRUCT: StructArray,
_Type_RUN_END_ENCODED: RunEndEncodedArray,
_Type_EXTENSION: ExtensionArray,
}
cdef inline shared_ptr[CBuffer] c_mask_inverted_from_obj(object mask, MemoryPool pool) except *:
"""
Convert a mask array object to a C buffer, inverted so that 1 signifies valid and 0 signifies null.
"""
cdef shared_ptr[CBuffer] c_mask
if mask is None:
c_mask = shared_ptr[CBuffer]()
elif isinstance(mask, Array):
if mask.type.id != Type_BOOL:
raise TypeError('Mask must be a pyarrow.Array of type boolean')
if mask.null_count != 0:
raise ValueError('Mask must not contain nulls')
inverted_mask = _pc().invert(mask, memory_pool=pool)
c_mask = pyarrow_unwrap_buffer(inverted_mask.buffers()[1])
else:
raise TypeError('Mask must be a pyarrow.Array of type boolean')
return c_mask
cdef object get_array_class_from_type(
const shared_ptr[CDataType]& sp_data_type):
cdef CDataType* data_type = sp_data_type.get()
if data_type == NULL:
raise ValueError('Array data type was NULL')
if data_type.id() == _Type_EXTENSION:
py_ext_data_type = pyarrow_wrap_data_type(sp_data_type)
return py_ext_data_type.__arrow_ext_class__()
else:
return _array_classes[data_type.id()]
cdef object get_values(object obj, bint* is_series):
if pandas_api.is_series(obj) or pandas_api.is_index(obj):
result = pandas_api.get_values(obj)
is_series[0] = True
elif isinstance(obj, np.ndarray):
result = obj
is_series[0] = False
else:
result = pandas_api.series(obj, copy=False).values
is_series[0] = False
return result
def concat_arrays(arrays, MemoryPool memory_pool=None):
"""
Concatenate the given arrays.
The contents of the input arrays are copied into the returned array.
Raises
------
ArrowInvalid
If not all of the arrays have the same type.
Parameters
----------
arrays : iterable of pyarrow.Array
Arrays to concatenate, must be identically typed.
memory_pool : MemoryPool, default None
For memory allocations. If None, the default pool is used.
Examples
--------
>>> import pyarrow as pa
>>> arr1 = pa.array([2, 4, 5, 100])
>>> arr2 = pa.array([2, 4])
>>> pa.concat_arrays([arr1, arr2])
<pyarrow.lib.Int64Array object at ...>
[
2,
4,
5,
100,
2,
4
]
"""
cdef:
vector[shared_ptr[CArray]] c_arrays
shared_ptr[CArray] c_concatenated
CMemoryPool* pool = maybe_unbox_memory_pool(memory_pool)
for array in arrays:
if not isinstance(array, Array):
raise TypeError("Iterable should contain Array objects, "
f"got {type(array)} instead")
c_arrays.push_back(pyarrow_unwrap_array(array))
with nogil:
c_concatenated = GetResultValue(Concatenate(c_arrays, pool))
return pyarrow_wrap_array(c_concatenated)
def _empty_array(DataType type):
"""
Create an empty array of the given type.
"""
if type.id == Type_DICTIONARY:
arr = DictionaryArray.from_arrays(
_empty_array(type.index_type), _empty_array(type.value_type),
ordered=type.ordered)
else:
arr = array([], type=type)
return arr