
multi_array_chunked.hxx
/************************************************************************/
/*                                                                      */
/*     Copyright 2012-2014 by Ullrich Koethe and Thorben Kroeger        */
/*                                                                      */
/*    This file is part of the VIGRA computer vision library.           */
/*    The VIGRA Website is                                              */
/*        http://hci.iwr.uni-heidelberg.de/vigra/                       */
/*    Please direct questions, bug reports, and contributions to        */
/*        ullrich.koethe@iwr.uni-heidelberg.de    or                    */
/*        vigra@informatik.uni-hamburg.de                               */
/*                                                                      */
/*    Permission is hereby granted, free of charge, to any person       */
/*    obtaining a copy of this software and associated documentation    */
/*    files (the "Software"), to deal in the Software without           */
/*    restriction, including without limitation the rights to use,      */
/*    copy, modify, merge, publish, distribute, sublicense, and/or      */
/*    sell copies of the Software, and to permit persons to whom the    */
/*    Software is furnished to do so, subject to the following          */
/*    conditions:                                                       */
/*                                                                      */
/*    The above copyright notice and this permission notice shall be    */
/*    included in all copies or substantial portions of the             */
/*    Software.                                                         */
/*                                                                      */
/*    THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND    */
/*    EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES   */
/*    OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND          */
/*    NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT       */
/*    HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,      */
/*    WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING      */
/*    FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR     */
/*    OTHER DEALINGS IN THE SOFTWARE.                                   */
/*                                                                      */
/************************************************************************/

/* benchmark results for a simple loop 'if(iter.get<1>() != count++)'

    ********************
    image size: 200^3, chunk size: 64^3, i.e. chunk count: 4^3
    times in msec, excluding time to store file on disk

    win64/vs2012 (koethe-laptop):                 uint8_t    float   double
    plain array                                        18       18       18
    chunked array (all in cache)                       25       26       26
    thread-safe chunked (all in cache)                 27       28       29
    thread-safe chunked (1 slice in cache)             29       33       39
    thread-safe chunked (1 row in cache)               45       48       52
    chunked (initial creation, all in cache)           33       43       57

    linux/gcc 4.7.3 (birdofprey):                 uint8_t    float   double
    plain array                                        16       20       21
    chunked array (all in cache)                       17       23       24
    thread-safe chunked (all in cache)                 19       24       25
    thread-safe chunked (1 slice in cache)             20       29       34
    thread-safe chunked (1 row in cache)               24       33       39
    chunked (initial creation, all in cache)           22       34       48

    OS X 10.7:                                    uint8_t    float   double
    plain array                                        11       22       24
    chunked array (all in cache)                       --       --       --
    thread-safe chunked (all in cache)                 20       25       26
    thread-safe chunked (1 slice in cache)             23       37       46
    thread-safe chunked (1 row in cache)               32       50       56
    chunked (initial creation, all in cache)           34       56       77
    (These numbers refer to nested loop iteration. Scan-order iteration
    is unfortunately 3.5 times slower on the Mac. On the other hand,
    two-level indexing is faster on a Mac than on Linux and Windows --
    the speed penalty is only a factor of 2 rather than 3.)

    **********************
    image size: 400^3, chunk size: 128^3, i.e. chunk count: 4^3
    times in msec, excluding time to store file on disk

    win64/vs2012 (koethe-laptop):                 uint8_t    float   double
    plain array                                       130      130      130
    chunked array (all in cache)                      190      190      200
    thread-safe chunked (all in cache)                190      200      210
    thread-safe chunked (1 slice in cache)            210      235      280
    thread-safe chunked (1 row in cache)              240      270      300
    chunked (initial creation, all in cache)          230      300      400

    linux/gcc 4.7.3 (birdofprey):                 uint8_t    float   double
    plain array                                       130      162      165
    chunked array (all in cache)                      131      180      184
    thread-safe chunked (all in cache)                135      183      188
    thread-safe chunked (1 slice in cache)            146      218      258
    thread-safe chunked (1 row in cache)              154      229      270
    chunked (initial creation, all in cache)          173      269      372

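    For reference, a plausible reconstruction of the measured loop (not the
    literal benchmark code; it assumes the coupled scan-order iterator API,
    where get<1>() yields the current element):

        MultiArray<3, int> a(Shape3(200, 200, 200));
        std::iota(a.begin(), a.end(), 0);        // needs <numeric>
        long count = 0;
        volatile bool ok = true;
        auto iter = createCoupledIterator(a),
             end = iter.getEndIterator();
        for(; iter != end; ++iter)
            if(iter.get<1>() != count++)
                ok = false;                      // the check defeats the optimizer

    The chunked measurements replace MultiArray by the respective backend.
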
    ***********************
    Compression:
    * I tried ZLIB, LZO, SNAPPY, LZ4, LZFX and FASTLZ (with compression level 1
      -- faster -- and level 2 -- higher compression). There are also QuickLZ and
      LZMAT which claim to be fast, but they are under a GPL license.
    * ZLIB compresses best, but is quite slow even at compression level 1
      (times are in ms and include compression and decompression).
                   byte    float   double
        ZLIB        121     3100     5800
        ZLIB1        79     1110     1190
        LZO          43      104      280
        SNAPPY       46       71      305
        LZ4          42       70      283
        LZFX         76      278      330
        FASTLZ1      52      280      309
        FASTLZ2      53      286      339
    * The fast compression algorithms are unable to compress the float array
      and achieve only ~50% for the double array, whereas ZLIB achieves 32% and 16%
      respectively (at the fastest compression level 1, it is still 33% and 17%
      respectively). LZFX cannot even compress the byte data (probably a bug?).
      Average compression ratios for the byte array are
        ZLIB:    2.3%
        ZLIB1:   4.6%
        LZO:     5.6%
        SNAPPY:  9.8%
        LZ4:     9.7%
        FASTLZ1: 7.6%
        FASTLZ2: 7.9%
    * LZO is under GPL (but there is a Java implementation under Apache license at
      http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.19.2/src/core/org/apache/hadoop/io/compress/lzo/).
      The others are BSD and MIT (FASTLZ).
    * Snappy doesn't support Windows natively, but porting is simple (see my github repo).
    * The source code for LZO, LZ4, LZFX, and FASTLZ can simply be copied to VIGRA,
      but LZO's GPL license is unsuitable.
    * HDF5 compression is already sufficient at level 1 (4-15%,
      higher levels don't lead to big gains) and is only a factor of 3-10 slower
      than without compression.
*/

#ifndef VIGRA_MULTI_ARRAY_CHUNKED_HXX
#define VIGRA_MULTI_ARRAY_CHUNKED_HXX

#include <queue>
#include <string>

#include "multi_fwd.hxx"
#include "multi_handle.hxx"
#include "multi_array.hxx"
#include "memory.hxx"
#include "metaprogramming.hxx"
#include "threading.hxx"
#include "compression.hxx"

#ifdef _WIN32
# include "vigra/windows.h"
#else
# include <fcntl.h>
# include <stdlib.h>
# include <unistd.h>
# include <sys/stat.h>
# include <sys/mman.h>
# include <cstdio>
#endif

// Bounds-checking macro, used if VIGRA_CHECK_BOUNDS is defined.
#ifdef VIGRA_CHECK_BOUNDS
#define VIGRA_ASSERT_INSIDE(diff) \
    vigra_precondition(this->isInside(diff), "Index out of bounds")
#else
#define VIGRA_ASSERT_INSIDE(diff)
#endif

namespace vigra {

#ifdef __APPLE__
    #define VIGRA_NO_SPARSE_FILE
#endif

#ifdef _WIN32

inline
void winErrorToException(std::string message = "")
{
    LPVOID lpMsgBuf;
    DWORD dw = GetLastError();

    FormatMessage(
        FORMAT_MESSAGE_ALLOCATE_BUFFER |
        FORMAT_MESSAGE_FROM_SYSTEM |
        FORMAT_MESSAGE_IGNORE_INSERTS,
        NULL,
        dw,
        MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT),
        (LPTSTR) &lpMsgBuf,
        0, NULL );

    message += (char*)lpMsgBuf;
    LocalFree(lpMsgBuf);

    throw std::runtime_error(message);
}

inline
std::string winTempFileName(std::string path = "")
{
    if(path == "")
    {
        TCHAR default_path[MAX_PATH];
        if(!GetTempPath(MAX_PATH, default_path))
            winErrorToException("winTempFileName(): ");
        path = default_path;
    }

    TCHAR name[MAX_PATH];
    if(!GetTempFileName(path.c_str(), TEXT("vigra"), 0, name))
        winErrorToException("winTempFileName(): ");

    return std::string(name);
}

inline
std::size_t winClusterSize()
{
    SYSTEM_INFO info;
    ::GetSystemInfo(&info);
    return info.dwAllocationGranularity;
}

#endif

namespace {

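// alignment required for offsets into memory-mapped files: both POSIX mmap()
// and Windows MapViewOfFile() only accept offsets that are multiples of this
// granularity (page size resp. allocation granularity)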
#ifdef _WIN32
std::size_t mmap_alignment = winClusterSize();
#else
std::size_t mmap_alignment = sysconf(_SC_PAGE_SIZE);
#endif

} // anonymous namespace

template <unsigned int N, class T>
class IteratorChunkHandle;

namespace detail {

template <unsigned int N>
struct ChunkIndexing
{
    template <class T, int M>
    static void chunkIndex(TinyVector<T, M> const & p,
                           TinyVector<T, M> const & bits,
                           TinyVector<T, M> & index)
    {
        typedef std::size_t UI;
        ChunkIndexing<N-1>::chunkIndex(p, bits, index);
        index[N-1] = (UI)p[N-1] >> bits[N-1];
    }

    template <class T, int M>
    static std::size_t chunkOffset(TinyVector<T, M> const & p,
                                   TinyVector<T, M> const & bits,
                                   TinyVector<T, M> const & strides)
    {
        typedef std::size_t UI;
        return ChunkIndexing<N-1>::chunkOffset(p, bits, strides) +
               ((UI)p[N-1] >> bits[N-1]) * strides[N-1];
    }

    template <class T, int M>
    static std::size_t offsetInChunk(TinyVector<T, M> const & p,
                                     TinyVector<T, M> const & mask,
                                     TinyVector<T, M> const & strides)
    {
        typedef std::size_t UI;
        return ChunkIndexing<N-1>::offsetInChunk(p, mask, strides) +
               ((UI)p[N-1] & (UI)mask[N-1]) * strides[N-1];
    }
};

template <>
struct ChunkIndexing<1>
{
    template <class T, int M>
    static void chunkIndex(TinyVector<T, M> const & p,
                           TinyVector<T, M> const & bits,
                           TinyVector<T, M> & index)
    {
        typedef std::size_t UI;
        index[0] = (UI)p[0] >> bits[0];
    }

    template <class T, int M>
    static std::size_t chunkOffset(TinyVector<T, M> const & p,
                                   TinyVector<T, M> const & bits,
                                   TinyVector<T, M> const & strides)
    {
        typedef std::size_t UI;
        return ((UI)p[0] >> bits[0]) * strides[0];
    }

    template <class T, int M>
    static std::size_t offsetInChunk(TinyVector<T, M> const & p,
                                     TinyVector<T, M> const & mask,
                                     TinyVector<T, M> const & strides)
    {
        typedef std::size_t UI;
        return ((UI)p[0] & (UI)mask[0]) * strides[0];
    }
};

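// Worked example (illustration only): with chunk_shape = (64, 64, 64), we get
// bits = (6, 6, 6) and mask = (63, 63, 63). A global coordinate p = (200, 10, 70)
// then splits into the chunk index p >> bits = (3, 0, 1) and the within-chunk
// coordinate p & mask = (8, 10, 6). chunkOffset() applies the former to the
// strides of the chunk array, offsetInChunk() the latter to the strides of
// the chunk itself.
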
template <class T, int M>
inline TinyVector<T, M>
computeChunkArrayShape(TinyVector<T, M> shape,
                       TinyVector<T, M> const & bits,
                       TinyVector<T, M> const & mask)
{
    for(int k=0; k<M; ++k)
        shape[k] = (shape[k] + mask[k]) >> bits[k];
    return shape;
}

template <class T, int M>
inline T
defaultCacheSize(TinyVector<T, M> const & shape)
{
    T res = max(shape);
    for(int k=0; k<M-1; ++k)
        for(int j=k+1; j<M; ++j)
            res = std::max(res, shape[k]*shape[j]);
    return res + 1;
}

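// Example: for a chunk array of shape (4, 8, 2), the axis-pair products are
// 4*8 = 32, 4*2 = 8, and 8*2 = 16, so the heuristic returns max(8, 32, 8, 16) + 1
// = 33 -- enough chunks to hold the largest 'slice' of chunks plus one extra.
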
} // namespace detail

template <unsigned int N, class T>
class ChunkBase
{
  public:
    typedef typename MultiArrayShape<N>::type shape_type;
    typedef T value_type;
    typedef T* pointer;

    ChunkBase()
    : strides_()
    , pointer_()
    {}

    ChunkBase(shape_type const & strides, pointer p = 0)
    : strides_(strides)
    , pointer_(p)
    {}

    typename MultiArrayShape<N>::type strides_;
    T * pointer_;
};

template <unsigned int N, class T>
class SharedChunkHandle
{
  public:
    typedef typename MultiArrayShape<N>::type shape_type;

    static const long chunk_asleep = -2;
    static const long chunk_uninitialized = -3;
    static const long chunk_locked = -4;
    static const long chunk_failed = -5;
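
    // Negative values of chunk_state_ encode the special states above;
    // nonnegative values count the chunk's current users (judging from the
    // ref/unref logic of ChunkedArray below), so 0 means 'in memory, unused'.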

    SharedChunkHandle()
    : pointer_(0)
    , chunk_state_()
    {
        chunk_state_ = chunk_uninitialized;
    }

    SharedChunkHandle(SharedChunkHandle const & rhs)
    : pointer_(rhs.pointer_)
    , chunk_state_()
    {
        chunk_state_ = chunk_uninitialized;
    }

    shape_type const & strides() const
    {
        return pointer_->strides_;
    }

    ChunkBase<N, T> * pointer_;
    mutable threading::atomic_long chunk_state_;

  private:
    SharedChunkHandle & operator=(SharedChunkHandle const & rhs);
};

template <unsigned int N, class T>
class ChunkedArrayBase
{
  public:
    enum ActualDimension { actual_dimension = (N == 0) ? 1 : N };
    typedef typename MultiArrayShape<N>::type shape_type;
    typedef T value_type;
    typedef value_type * pointer;
    typedef value_type & reference;
    typedef ChunkBase<N, T> Chunk;

    ChunkedArrayBase()
    : shape_()
    , chunk_shape_()
    {}

    ChunkedArrayBase(shape_type const & shape, shape_type const & chunk_shape)
    : shape_(shape)
    , chunk_shape_(prod(chunk_shape) > 0 ? chunk_shape : detail::ChunkShape<N, T>::defaultShape())
    {}

    virtual ~ChunkedArrayBase()
    {}

    virtual void unrefChunk(IteratorChunkHandle<N, T> * h) const = 0;

    virtual pointer chunkForIterator(shape_type const & point,
                                     shape_type & strides, shape_type & upper_bound,
                                     IteratorChunkHandle<N, T> * h) = 0;

    virtual pointer chunkForIterator(shape_type const & point,
                                     shape_type & strides, shape_type & upper_bound,
                                     IteratorChunkHandle<N, T> * h) const = 0;

    virtual std::string backend() const = 0;

    virtual shape_type chunkArrayShape() const = 0;

    virtual bool isReadOnly() const
    {
        return false;
    }

    MultiArrayIndex size() const
    {
        return prod(shape_);
    }

    shape_type const & shape() const
    {
        return shape_;
    }

    MultiArrayIndex shape(MultiArrayIndex d) const
    {
        return shape_[d];
    }

    shape_type const & chunkShape() const
    {
        return chunk_shape_;
    }

    MultiArrayIndex chunkShape(MultiArrayIndex d) const
    {
        return chunk_shape_[d];
    }

    bool isInside(shape_type const & p) const
    {
        for(unsigned d=0; d<N; ++d)
            if(p[d] < 0 || p[d] >= shape_[d])
                return false;
        return true;
    }

    shape_type shape_, chunk_shape_;
};

template <unsigned int N, class T>
class ChunkedArray;

struct ChunkUnrefProxyBase
{
    virtual ~ChunkUnrefProxyBase() {}
};

template <unsigned int N, class T_MaybeConst>
class MultiArrayView<N, T_MaybeConst, ChunkedArrayTag>
: public ChunkedArrayBase<N, typename UnqualifiedType<T_MaybeConst>::type>
{
  public:
    enum ActualDimension { actual_dimension = (N==0) ? 1 : N };
    typedef typename UnqualifiedType<T_MaybeConst>::type T;
    typedef T value_type;   // FIXME: allow Multiband<T> ???
    typedef T_MaybeConst & reference;
    typedef const value_type & const_reference;
    typedef T_MaybeConst * pointer;
    typedef const value_type * const_pointer;
    typedef typename MultiArrayShape<actual_dimension>::type difference_type;
    typedef difference_type key_type;
    typedef difference_type size_type;
    typedef difference_type shape_type;
    typedef MultiArrayIndex difference_type_1;
    typedef ChunkIterator<actual_dimension, T_MaybeConst> chunk_iterator;
    typedef ChunkIterator<actual_dimension, T const> chunk_const_iterator;
    typedef StridedScanOrderIterator<actual_dimension, ChunkedMemory<T_MaybeConst>, T_MaybeConst&, T_MaybeConst*> iterator;
    typedef StridedScanOrderIterator<actual_dimension, ChunkedMemory<T const>, T const &, T const *> const_iterator;
    typedef MultiArrayView<N, T_MaybeConst, ChunkedArrayTag> view_type;
    typedef MultiArrayView<N, T const, ChunkedArrayTag> const_view_type;
    typedef ChunkedArrayTag StrideTag;
    typedef ChunkBase<N, T> Chunk;

    typedef MultiArray<N, Chunk> ChunkHolder;

    struct UnrefProxy
    : public ChunkUnrefProxyBase
    {
        UnrefProxy(int size, ChunkedArray<N, T> * array)
        : chunks_(size)
        , array_(array)
        {}

        ~UnrefProxy()
        {
            if(array_)
                array_->unrefChunks(chunks_);
        }

        ArrayVector<SharedChunkHandle<N, T> *> chunks_;
        ChunkedArray<N, T> * array_;
    };

    virtual shape_type chunkArrayShape() const
    {
        return chunks_.shape();
    }

    shape_type chunkStart(shape_type const & global_start) const
    {
        shape_type chunk_start(SkipInitialization);
        detail::ChunkIndexing<N>::chunkIndex(global_start, bits_, chunk_start);
        return chunk_start;
    }

    shape_type chunkStop(shape_type global_stop) const
    {
        global_stop -= shape_type(1);
        shape_type chunk_stop(SkipInitialization);
        detail::ChunkIndexing<N>::chunkIndex(global_stop, bits_, chunk_stop);
        chunk_stop += shape_type(1);
        return chunk_stop;
    }

    virtual void unrefChunk(IteratorChunkHandle<N, T> *) const {}

    virtual T* chunkForIterator(shape_type const & point,
                                shape_type & strides, shape_type & upper_bound,
                                IteratorChunkHandle<N, T> * h)
    {
        return const_cast<MultiArrayView const *>(this)->chunkForIterator(point, strides, upper_bound, h);
    }

    virtual T* chunkForIterator(shape_type const & point,
                                shape_type & strides, shape_type & upper_bound,
                                IteratorChunkHandle<N, T> * h) const
    {
        shape_type global_point = point + h->offset_;

        if(!this->isInside(global_point))
        {
            upper_bound = point + this->chunk_shape_;
            return 0;
        }

        global_point += offset_;
        shape_type coffset = offset_ + h->offset_;

        shape_type chunkIndex = chunkStart(global_point);
        Chunk const * chunk = &chunks_[chunkIndex];
        strides = chunk->strides_;
        upper_bound = (chunkIndex + shape_type(1)) * this->chunk_shape_ - coffset;
        std::size_t offset = detail::ChunkIndexing<N>::offsetInChunk(global_point, mask_, strides);
        return const_cast<T*>(chunk->pointer_ + offset);
    }

    virtual std::string backend() const
    {
        return "MultiArrayView<ChunkedArrayTag>";
    }

    MultiArrayView()
    : ChunkedArrayBase<N, T>()
    {}

    MultiArrayView(shape_type const & shape, shape_type const & chunk_shape)
    : ChunkedArrayBase<N, T>(shape, chunk_shape)
    {}

    MultiArrayView & operator=(MultiArrayView const & rhs)
    {
        if(this != &rhs)
        {
            if(!hasData())
            {
                ChunkedArrayBase<N, T>::operator=(rhs);
                chunks_ = rhs.chunks_;
                offset_ = rhs.offset_;
                bits_ = rhs.bits_;
                mask_ = rhs.mask_;
                unref_ = rhs.unref_;
            }
            else
            {
                vigra_precondition(this->shape() == rhs.shape(),
                    "MultiArrayView::operator=(): shape mismatch.");
                iterator i = begin(), ie = end();
                const_iterator j = rhs.begin();
                for(; i != ie; ++i, ++j)
                    *i = *j;
            }
        }
        return *this;
    }

    #define VIGRA_CHUNKED_ARRAY_VIEW_ASSIGN(op) \
    template<class U, class C1> \
    MultiArrayView & operator op(MultiArrayView<N, U, C1> const & rhs) \
    { \
        vigra_precondition(this->shape() == rhs.shape(), \
            "MultiArrayView::operator" #op "(): shape mismatch."); \
        iterator i = begin(), ie = end(); \
        typename MultiArrayView<N, U, C1>::const_iterator j = rhs.begin(); \
        for(; i != ie; ++i, ++j) \
            *i op detail::RequiresExplicitCast<value_type>::cast(*j); \
        return *this; \
    } \
    \
    MultiArrayView & operator op(value_type const & v) \
    { \
        if(hasData()) \
        { \
            iterator i = begin(), ie = end(); \
            for(; i != ie; ++i) \
                *i op v; \
        } \
        return *this; \
    }

    VIGRA_CHUNKED_ARRAY_VIEW_ASSIGN(=)
    VIGRA_CHUNKED_ARRAY_VIEW_ASSIGN(+=)
    VIGRA_CHUNKED_ARRAY_VIEW_ASSIGN(-=)
    VIGRA_CHUNKED_ARRAY_VIEW_ASSIGN(*=)
    VIGRA_CHUNKED_ARRAY_VIEW_ASSIGN(/=)

    #undef VIGRA_CHUNKED_ARRAY_VIEW_ASSIGN

    // template<class Expression>
    // MultiArrayView & operator=(multi_math::MultiMathOperand<Expression> const & rhs)
    // {
    //     multi_math::math_detail::assign(*this, rhs);
    //     return *this;
    // }

    // /** Add-assignment of an array expression. Fails with
    //     <tt>PreconditionViolation</tt> exception when the shapes do not match.
    // */
    // template<class Expression>
    // MultiArrayView & operator+=(multi_math::MultiMathOperand<Expression> const & rhs)
    // {
    //     multi_math::math_detail::plusAssign(*this, rhs);
    //     return *this;
    // }

    // /** Subtract-assignment of an array expression. Fails with
    //     <tt>PreconditionViolation</tt> exception when the shapes do not match.
    // */
    // template<class Expression>
    // MultiArrayView & operator-=(multi_math::MultiMathOperand<Expression> const & rhs)
    // {
    //     multi_math::math_detail::minusAssign(*this, rhs);
    //     return *this;
    // }

    // /** Multiply-assignment of an array expression. Fails with
    //     <tt>PreconditionViolation</tt> exception when the shapes do not match.
    // */
    // template<class Expression>
    // MultiArrayView & operator*=(multi_math::MultiMathOperand<Expression> const & rhs)
    // {
    //     multi_math::math_detail::multiplyAssign(*this, rhs);
    //     return *this;
    // }

    // /** Divide-assignment of an array expression. Fails with
    //     <tt>PreconditionViolation</tt> exception when the shapes do not match.
    // */
    // template<class Expression>
    // MultiArrayView & operator/=(multi_math::MultiMathOperand<Expression> const & rhs)
    // {
    //     multi_math::math_detail::divideAssign(*this, rhs);
    //     return *this;
    // }

    reference operator[](shape_type point)
    {
        VIGRA_ASSERT_INSIDE(point);
        point += offset_;
        Chunk * chunk = chunks_.data() +
                        detail::ChunkIndexing<N>::chunkOffset(point, bits_, chunks_.stride());
        return *(chunk->pointer_ +
                 detail::ChunkIndexing<N>::offsetInChunk(point, mask_, chunk->strides_));
    }

    const_reference operator[](shape_type const & point) const
    {
        return const_cast<MultiArrayView *>(this)->operator[](point);
    }

    template <int M>
    MultiArrayView<N-M, T, ChunkedArrayTag>
    operator[](const TinyVector<MultiArrayIndex, M> & d) const
    {
        return bindInner(d);
    }

    reference operator[](difference_type_1 d)
    {
        return operator[](scanOrderIndexToCoordinate(d));
    }

    const_reference operator[](difference_type_1 d) const
    {
        return operator[](scanOrderIndexToCoordinate(d));
    }

    difference_type scanOrderIndexToCoordinate(difference_type_1 d) const
    {
        difference_type coord(SkipInitialization);
        detail::ScanOrderToCoordinate<actual_dimension>::exec(d, this->shape_, coord);
        return coord;
    }

    /** convert coordinate to scan-order index.
    */
    difference_type_1 coordinateToScanOrderIndex(const difference_type & d) const
    {
        return detail::CoordinateToScanOrder<actual_dimension>::exec(this->shape_, d);
    }

    // /** 1D array access. Use only if N == 1.
    // */
    // reference operator() (difference_type_1 x)
    // {
    //     VIGRA_ASSERT_INSIDE(difference_type(x));
    //     return m_ptr [detail::CoordinatesToOffest<StrideTag>::exec(m_stride, x)];
    // }

    // /** 2D array access. Use only if N == 2.
    // */
    // reference operator() (difference_type_1 x, difference_type_1 y)
    // {
    //     VIGRA_ASSERT_INSIDE(difference_type(x, y));
    //     return m_ptr [detail::CoordinatesToOffest<StrideTag>::exec(m_stride, x, y)];
    // }

    // /** 3D array access. Use only if N == 3.
    // */
    // reference operator() (difference_type_1 x, difference_type_1 y, difference_type_1 z)
    // {
    //     VIGRA_ASSERT_INSIDE(difference_type(x, y, z));
    //     return m_ptr [m_stride[0]*x + m_stride[1]*y + m_stride[2]*z];
    // }

    // /** 4D array access. Use only if N == 4.
    // */
    // reference operator() (difference_type_1 x, difference_type_1 y,
    //                       difference_type_1 z, difference_type_1 u)
    // {
    //     VIGRA_ASSERT_INSIDE(difference_type(x, y, z, u));
    //     return m_ptr [m_stride[0]*x + m_stride[1]*y + m_stride[2]*z + m_stride[3]*u];
    // }

    // /** 5D array access. Use only if N == 5.
    // */
    // reference operator() (difference_type_1 x, difference_type_1 y, difference_type_1 z,
    //                       difference_type_1 u, difference_type_1 v)
    // {
    //     VIGRA_ASSERT_INSIDE(difference_type(x, y, z, u, v));
    //     return m_ptr [m_stride[0]*x + m_stride[1]*y + m_stride[2]*z + m_stride[3]*u + m_stride[4]*v];
    // }

    // /** 1D const array access. Use only if N == 1.
    // */
    // const_reference operator() (difference_type_1 x) const
    // {
    //     VIGRA_ASSERT_INSIDE(difference_type(x));
    //     return m_ptr [detail::CoordinatesToOffest<StrideTag>::exec(m_stride, x)];
    // }

    // /** 2D const array access. Use only if N == 2.
    // */
    // const_reference operator() (difference_type_1 x, difference_type_1 y) const
    // {
    //     VIGRA_ASSERT_INSIDE(difference_type(x, y));
    //     return m_ptr [detail::CoordinatesToOffest<StrideTag>::exec(m_stride, x, y)];
    // }

    // /** 3D const array access. Use only if N == 3.
    // */
    // const_reference operator() (difference_type_1 x, difference_type_1 y, difference_type_1 z) const
    // {
    //     VIGRA_ASSERT_INSIDE(difference_type(x, y, z));
    //     return m_ptr [m_stride[0]*x + m_stride[1]*y + m_stride[2]*z];
    // }

    // /** 4D const array access. Use only if N == 4.
    // */
    // const_reference operator() (difference_type_1 x, difference_type_1 y,
    //                             difference_type_1 z, difference_type_1 u) const
    // {
    //     VIGRA_ASSERT_INSIDE(difference_type(x, y, z, u));
    //     return m_ptr [m_stride[0]*x + m_stride[1]*y + m_stride[2]*z + m_stride[3]*u];
    // }

    // /** 5D const array access. Use only if N == 5.
    // */
    // const_reference operator() (difference_type_1 x, difference_type_1 y, difference_type_1 z,
    //                             difference_type_1 u, difference_type_1 v) const
    // {
    //     VIGRA_ASSERT_INSIDE(difference_type(x, y, z, u, v));
    //     return m_ptr [m_stride[0]*x + m_stride[1]*y + m_stride[2]*z + m_stride[3]*u + m_stride[4]*v];
    // }

    template <class U>
    MultiArrayView & init(const U & init)
    {
        return operator=(init);
    }

    template <class U, class CN>
    void copy(const MultiArrayView<N, U, CN> & rhs)
    {
        operator=(rhs);
    }

    template <class T2, class C2>
    void swapData(MultiArrayView<N, T2, C2> rhs)
    {
        if(this == &rhs)
            return;
        vigra_precondition(this->shape() == rhs.shape(),
            "MultiArrayView::swapData(): shape mismatch.");
        iterator i = begin(), ie = end();
        typename MultiArrayView<N, T2, C2>::iterator j = rhs.begin();
        for(; i != ie; ++i, ++j)
            std::swap(*i, *j);
    }

    bool isUnstrided(unsigned int dimension = N-1) const
    {
        if(chunks_.size() > 1)
            return false;
        difference_type s = vigra::detail::defaultStride<actual_dimension>(this->shape());
        for(unsigned int k = 0; k <= dimension; ++k)
            if(chunks_.data()->strides_[k] != s[k])
                return false;
        return true;
    }

    MultiArrayView<N-1, value_type, ChunkedArrayTag>
    bindAt(MultiArrayIndex m, MultiArrayIndex d) const
    {
        MultiArrayView<N-1, value_type, ChunkedArrayTag> res(this->shape_.dropIndex(m), this->chunk_shape_.dropIndex(m));
        res.offset_ = offset_.dropIndex(m);
        res.bits_ = bits_.dropIndex(m);
        res.mask_ = mask_.dropIndex(m);
        res.chunks_.reshape(chunks_.shape().dropIndex(m));
        res.unref_ = unref_;

        typedef std::size_t UI;
        UI start = offset_[m] + d;
        UI chunk_start = start >> bits_[m];
        UI startInChunk = start - chunk_start * this->chunk_shape_[m];

        MultiArrayView<N-1, Chunk> view(chunks_.bindAt(m, chunk_start));
        MultiCoordinateIterator<N-1> i(view.shape()),
                                     end(i.getEndIterator());
        for(; i != end; ++i)
        {
            res.chunks_[*i].pointer_ = view[*i].pointer_ + startInChunk*view[*i].strides_[m];
            res.chunks_[*i].strides_ = view[*i].strides_.dropIndex(m);
        }

        return res;
    }

    template <unsigned int M>
    MultiArrayView<N-1, value_type, ChunkedArrayTag>
    bind(difference_type_1 d) const
    {
        return bindAt(M, d);
    }

    MultiArrayView<N-1, value_type, ChunkedArrayTag>
    bindOuter(difference_type_1 d) const
    {
        return bindAt(N-1, d);
    }

    template <int M, class Index>
    MultiArrayView<N-M, value_type, ChunkedArrayTag>
    bindOuter(const TinyVector<Index, M> & d) const
    {
        return bindAt(N-1, d[M-1]).bindOuter(d.dropIndex(M-1));
    }

    template <class Index>
    MultiArrayView<N-1, value_type, ChunkedArrayTag>
    bindOuter(const TinyVector<Index, 1> & d) const
    {
        return bindAt(N-1, d[0]);
    }

    MultiArrayView<N-1, value_type, ChunkedArrayTag>
    bindInner(difference_type_1 d) const
    {
        return bindAt(0, d);
    }

    template <int M, class Index>
    MultiArrayView<N-M, value_type, ChunkedArrayTag>
    bindInner(const TinyVector<Index, M> & d) const
    {
        return bindAt(0, d[0]).bindInner(d.dropIndex(0));
    }

    template <class Index>
    MultiArrayView<N-1, value_type, ChunkedArrayTag>
    bindInner(const TinyVector<Index, 1> & d) const
    {
        return bindAt(0, d[0]);
    }

    // MultiArrayView<N, typename ExpandElementResult<T>::type, StridedArrayTag>
    // bindElementChannel(difference_type_1 i) const
    // {
    //     vigra_precondition(0 <= i && i < ExpandElementResult<T>::size,
    //         "MultiArrayView::bindElementChannel(i): 'i' out of range.");
    //     return expandElements(0).bindInner(i);
    // }

    // MultiArrayView<N+1, typename ExpandElementResult<T>::type, StridedArrayTag>
    // expandElements(difference_type_1 d) const;

    // MultiArrayView<N+1, T, StrideTag>
    // insertSingletonDimension(difference_type_1 i) const;

    // MultiArrayView<N, Multiband<value_type>, StrideTag> multiband() const
    // {
    //     return MultiArrayView<N, Multiband<value_type>, StrideTag>(*this);
    // }

    // MultiArrayView<1, T, StridedArrayTag> diagonal() const
    // {
    //     return MultiArrayView<1, T, StridedArrayTag>(Shape1(vigra::min(m_shape)),
    //                                                  Shape1(vigra::sum(m_stride)), m_ptr);
    // }

    inline void
    checkSubarrayBounds(shape_type const & start, shape_type const & stop,
                        std::string message) const
    {
        message += ": subarray out of bounds.";
        vigra_precondition(allLessEqual(shape_type(), start) &&
                           allLess(start, stop) &&
                           allLessEqual(stop, this->shape_),
                           message);
    }

    MultiArrayView<N, value_type, ChunkedArrayTag>
    subarray(shape_type start, shape_type stop)
    {
        checkSubarrayBounds(start, stop, "MultiArrayView<N-1, T, ChunkedArrayTag>::subarray()");
        start += offset_;
        stop += offset_;
        shape_type chunk_start(chunkStart(start));

        MultiArrayView<N, value_type, ChunkedArrayTag> view(stop-start, this->chunk_shape_);
        view.chunks_ = chunks_.subarray(chunk_start, chunkStop(stop));
        view.offset_ = start - chunk_start * this->chunk_shape_;
        view.bits_ = bits_;
        view.mask_ = mask_;
        view.unref_ = unref_;
        return view;
    }

    // /** apply an additional striding to the image, thereby reducing
    //     the shape of the array.
    //     for example, multiplying the stride of dimension one by three
    //     turns an appropriately laid out (interleaved) rgb image into
    //     a single band image.
    // */
    // MultiArrayView<N, T, StridedArrayTag>
    // stridearray(const difference_type & s) const
    // {
    //     difference_type shape = m_shape;
    //     for(unsigned int i = 0; i < actual_dimension; ++i)
    //         shape[i] /= s[i];
    //     return MultiArrayView<N, T, StridedArrayTag>(shape, m_stride * s, m_ptr);
    // }

    MultiArrayView<N, value_type, ChunkedArrayTag>
    transpose() const
    {
        return transpose(difference_type::linearSequence(N-1, -1));
    }

    MultiArrayView<N, value_type, ChunkedArrayTag>
    transpose(const difference_type & permutation) const
    {
        MultiArrayView<N, value_type, ChunkedArrayTag>
            view(vigra::transpose(this->shape_, permutation), vigra::transpose(this->chunk_shape_, permutation));
        view.chunks_ = chunks_.transpose(permutation); // also checks if permutation is valid
        view.offset_ = vigra::transpose(offset_, permutation);
        view.bits_ = vigra::transpose(bits_, permutation);
        view.mask_ = vigra::transpose(mask_, permutation);
        view.unref_ = unref_;
        typename MultiArray<N, Chunk>::iterator i = view.chunks_.begin(),
                                                iend = view.chunks_.end();
        for(; i != iend; ++i)
            i->strides_ = vigra::transpose(i->strides_, permutation);
        return view;
    }

    // MultiArrayView<N, T, StridedArrayTag>
    // permuteDimensions(const difference_type & s) const;

    // /** Permute the dimensions of the array so that the strides are in ascending order.
    //     Determines the appropriate permutation and then calls permuteDimensions().
    // */
    // MultiArrayView<N, T, StridedArrayTag>
    // permuteStridesAscending() const;

    // /** Permute the dimensions of the array so that the strides are in descending order.
    //     Determines the appropriate permutation and then calls permuteDimensions().
    // */
    // MultiArrayView<N, T, StridedArrayTag>
    // permuteStridesDescending() const;

    // /** Compute the ordering of the strides in this array.
    //     The result describes the current permutation of the axes relative
    //     to the standard ascending stride order.
    // */
    // difference_type strideOrdering() const
    // {
    //     return strideOrdering(m_stride);
    // }

    // /** Compute the ordering of the given strides.
    //     The result describes the current permutation of the axes relative
    //     to the standard ascending stride order.
    // */
    // static difference_type strideOrdering(difference_type strides);

    template <class U, class C1>
    bool operator==(MultiArrayView<N, U, C1> const & rhs) const
    {
        if(this->shape() != rhs.shape())
            return false;
        const_iterator i = begin(), ie = end();
        typename MultiArrayView<N, U, C1>::const_iterator j = rhs.begin();
        for(; i != ie; ++i, ++j)
            if(*i != *j)
                return false;
        return true;
    }

    template <class U, class C1>
    bool operator!=(MultiArrayView<N, U, C1> const & rhs) const
    {
        return !operator==(rhs);
    }

    // bool all() const
    // {
    //     bool res = true;
    //     detail::reduceOverMultiArray(traverser_begin(), shape(),
    //                                  res,
    //                                  detail::AllTrueReduceFunctor(),
    //                                  MetaInt<actual_dimension-1>());
    //     return res;
    // }

    // bool any() const
    // {
    //     bool res = false;
    //     detail::reduceOverMultiArray(traverser_begin(), shape(),
    //                                  res,
    //                                  detail::AnyTrueReduceFunctor(),
    //                                  MetaInt<actual_dimension-1>());
    //     return res;
    // }

    // void minmax(T * minimum, T * maximum) const
    // {
    //     std::pair<T, T> res(NumericTraits<T>::max(), NumericTraits<T>::min());
    //     detail::reduceOverMultiArray(traverser_begin(), shape(),
    //                                  res,
    //                                  detail::MinmaxReduceFunctor(),
    //                                  MetaInt<actual_dimension-1>());
    //     *minimum = res.first;
    //     *maximum = res.second;
    // }

    // template <class U>
    // void meanVariance(U * mean, U * variance) const
    // {
    //     typedef typename NumericTraits<U>::RealPromote R;
    //     R zero = R();
    //     triple<double, R, R> res(0.0, zero, zero);
    //     detail::reduceOverMultiArray(traverser_begin(), shape(),
    //                                  res,
    //                                  detail::MeanVarianceReduceFunctor(),
    //                                  MetaInt<actual_dimension-1>());
    //     *mean = res.second;
    //     *variance = res.third / res.first;
    // }

    // template <class U>
    // U sum() const
    // {
    //     U res = NumericTraits<U>::zero();
    //     detail::reduceOverMultiArray(traverser_begin(), shape(),
    //                                  res,
    //                                  detail::SumReduceFunctor(),
    //                                  MetaInt<actual_dimension-1>());
    //     return res;
    // }

    // template <class U, class S>
    // void sum(MultiArrayView<N, U, S> sums) const
    // {
    //     transformMultiArray(srcMultiArrayRange(*this),
    //                         destMultiArrayRange(sums),
    //                         FindSum<U>());
    // }

    // template <class U>
    // U product() const
    // {
    //     U res = NumericTraits<U>::one();
    //     detail::reduceOverMultiArray(traverser_begin(), shape(),
    //                                  res,
    //                                  detail::ProdReduceFunctor(),
    //                                  MetaInt<actual_dimension-1>());
    //     return res;
    // }

    // typename NormTraits<MultiArrayView>::SquaredNormType
    // squaredNorm() const
    // {
    //     typedef typename NormTraits<MultiArrayView>::SquaredNormType SquaredNormType;
    //     SquaredNormType res = NumericTraits<SquaredNormType>::zero();
    //     detail::reduceOverMultiArray(traverser_begin(), shape(),
    //                                  res,
    //                                  detail::SquaredL2NormReduceFunctor(),
    //                                  MetaInt<actual_dimension-1>());
    //     return res;
    // }

    // typename NormTraits<MultiArrayView>::NormType
    // norm(int type = 2, bool useSquaredNorm = true) const;

    bool hasData () const
    {
        return chunks_.hasData();
    }

    iterator begin()
    {
        return createCoupledIterator(*this);
    }

    iterator end()
    {
        return begin().getEndIterator();
    }

    const_iterator cbegin() const
    {
        return createCoupledIterator(const_cast<MultiArrayView const &>(*this));
    }

    const_iterator cend() const
    {
        return cbegin().getEndIterator();
    }

    const_iterator begin() const
    {
        return createCoupledIterator(*this);
    }

    const_iterator end() const
    {
        return begin().getEndIterator();
    }

    chunk_iterator chunk_begin(shape_type const & start, shape_type const & stop)
    {
        checkSubarrayBounds(start, stop, "MultiArrayView<N-1, T, ChunkedArrayTag>::chunk_begin()");
        return chunk_iterator(this, start, stop, chunkStart(start), chunkStop(stop), this->chunk_shape_);
    }

    chunk_iterator chunk_end(shape_type const & start, shape_type const & stop)
    {
        return chunk_begin(start, stop).getEndIterator();
    }

    chunk_const_iterator chunk_begin(shape_type const & start, shape_type const & stop) const
    {
        checkSubarrayBounds(start, stop, "MultiArrayView<N-1, T, ChunkedArrayTag>::chunk_begin()");
        return chunk_const_iterator(this, start, stop, chunkStart(start), chunkStop(stop), this->chunk_shape_);
    }

    chunk_const_iterator chunk_end(shape_type const & start, shape_type const & stop) const
    {
        return chunk_begin(start, stop).getEndIterator();
    }

    chunk_const_iterator chunk_cbegin(shape_type const & start, shape_type const & stop) const
    {
        checkSubarrayBounds(start, stop, "MultiArrayView<N-1, T, ChunkedArrayTag>::chunk_cbegin()");
        return chunk_const_iterator(this, start, stop, chunkStart(start), chunkStop(stop), this->chunk_shape_);
    }

    chunk_const_iterator chunk_cend(shape_type const & start, shape_type const & stop) const
    {
        return chunk_cbegin(start, stop).getEndIterator();
    }

    view_type view ()
    {
        return *this;
    }

    MultiArray<N, Chunk> chunks_;
    shape_type offset_, bits_, mask_;
    VIGRA_SHARED_PTR<ChunkUnrefProxyBase> unref_;
};

template <unsigned int N, class T>
typename MultiArrayView<N, T, ChunkedArrayTag>::iterator
createCoupledIterator(MultiArrayView<N, T, ChunkedArrayTag> & m)
{
    typedef typename MultiArrayView<N, T, ChunkedArrayTag>::iterator IteratorType;
    typedef typename IteratorType::handle_type P1;
    typedef typename P1::base_type P0;

    return IteratorType(P1(m,
                           P0(m.shape())));
}

template <unsigned int N, class T>
typename MultiArrayView<N, T, ChunkedArrayTag>::const_iterator
createCoupledIterator(MultiArrayView<N, T, ChunkedArrayTag> const & m)
{
    typedef typename MultiArrayView<N, T, ChunkedArrayTag>::const_iterator IteratorType;
    typedef typename IteratorType::handle_type P1;
    typedef typename P1::base_type P0;

    return IteratorType(P1(m,
                           P0(m.shape())));
}

/** \addtogroup ChunkedArrayClasses Chunked arrays

    Store big data (potentially larger than RAM) as a collection of rectangular blocks.
*/
//@{

/** \brief Option object for \ref ChunkedArray construction.
*/
class ChunkedArrayOptions
{
  public:
    /** \brief Initialize options with defaults.
    */
    ChunkedArrayOptions()
    : fill_value(0.0)
    , cache_max(-1)
    , compression_method(DEFAULT_COMPRESSION)
    {}

    /** \brief Element value for read-only access of uninitialized chunks.

        Default: 0
    */
    ChunkedArrayOptions & fillValue(double v)
    {
        fill_value = v;
        return *this;
    }

    ChunkedArrayOptions fillValue(double v) const
    {
        return ChunkedArrayOptions(*this).fillValue(v);
    }

    /** \brief Maximum number of chunks in the cache.

        Default: -1 ( = use a heuristic depending on array shape)
    */
    ChunkedArrayOptions & cacheMax(int v)
    {
        cache_max = v;
        return *this;
    }

    ChunkedArrayOptions cacheMax(int v) const
    {
        return ChunkedArrayOptions(*this).cacheMax(v);
    }

    /** \brief Compress inactive chunks with the given method.

        Default: DEFAULT_COMPRESSION (depends on backend)
    */
    ChunkedArrayOptions & compression(CompressionMethod v)
    {
        compression_method = v;
        return *this;
    }

    ChunkedArrayOptions compression(CompressionMethod v) const
    {
        return ChunkedArrayOptions(*this).compression(v);
    }

    double fill_value;
    int cache_max;
    CompressionMethod compression_method;
};
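
// Typical use of the named-parameter idiom defined above (a sketch; it
// assumes the CompressionMethod constant LZ4 from "compression.hxx"):
//
//     ChunkedArrayOptions options = ChunkedArrayOptions()
//                                       .cacheMax(64)
//                                       .fillValue(0.0)
//                                       .compression(LZ4);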

/** \weakgroup ParallelProcessing
    \sa ChunkedArray
*/

/** \brief Interface and base class for chunked arrays.

Very big data arrays (possibly bigger than the available RAM) can
only be processed in smaller pieces. To support quick access to
these pieces, it is advantageous to store big arrays in chunks,
i.e. as a collection of small rectangular subarrays. The class
ChunkedArray encapsulates storage and handling of these chunks and
provides various APIs to easily access the data.

<b>\#include</b> <vigra/multi_array_chunked.hxx> <br/>
Namespace: vigra

@tparam N the array dimension
@tparam T the type of the array elements

(these are the same as in \ref MultiArrayView). The actual way of chunk storage is determined by the derived class the program uses:

<ul>
<li>ChunkedArrayFull: Provides the chunked array API for a standard
    \ref MultiArray (i.e. there is only one chunk for the entire array).

<li>ChunkedArrayLazy: All chunks reside in memory, but are only
    allocated upon first access.

<li>ChunkedArrayCompressed: Like ChunkedArrayLazy, but temporarily
    unused chunks are compressed in memory to save space.

<li>ChunkedArrayTmpFile: Chunks are stored in a memory-mapped file.
    Temporarily unused chunks are written to the hard-drive and deleted from
    memory.

<li>ChunkedArrayHDF5: Chunks are stored in an HDF5 dataset by means of
    HDF5's native chunked storage capabilities. Temporarily unused chunks are
    written to the hard-drive in compressed form and deleted from memory.
</ul>
You must use these derived classes to construct a chunked array because
ChunkedArray itself is an abstract class.

Chunks can be in one of the following states:
<ul>
<li>uninitialized: Chunks are only initialized (i.e. allocated) upon the first
    write access. If an uninitialized chunk is accessed in a read-only manner, the
    system returns a pseudo-chunk whose elements have a user-provided fill value.

<li>asleep: The chunk is currently unused and has been compressed and/or
    swapped out to the hard drive.

<li>inactive: The chunk is currently unused, but still resides in memory.

<li>active: The chunk resides in memory and is currently in use.

<li>locked: Chunks are briefly in this state during transitions
    between the other states (e.g. while loading and/or decompression is
    in progress).

<li>failed: An unexpected error occurred, e.g. the system is out of memory
    or a write to the hard drive failed.
</ul>
In-memory chunks (active and inactive) are placed in a cache. If a chunk
transitions from the 'asleep' to the 'active' state, it is added to the cache,
and an 'inactive' chunk is removed and sent 'asleep'. If there is no 'inactive'
chunk in the cache, the cache size is temporarily increased. All state
transitions are thread-safe.

In order to optimize performance, the user should adjust the cache size (via
\ref setCacheMaxSize() or \ref ChunkedArrayOptions) so that it can hold all
chunks that are frequently needed (e.g. all chunks forming a row of the full
array).

Another performance-critical parameter is the chunk shape. While the system
uses sensible defaults (512<sup>2</sup> for 2D arrays, 64<sup>3</sup> for 3D,
64x64x16x4 for 4D, and 64x64x16x4x4 for 5D), the shape may need to be adjusted
via the array's constructor to match the access patterns of the algorithms to
be used. For speed reasons, chunk shapes must be powers of 2.
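
For example, a chunked array with a custom chunk shape, cache size, and
compression could be constructed as follows (a sketch only; it uses the
ChunkedArrayCompressed backend listed above and assumes the LZ4 constant
from <vigra/compression.hxx>):
\code
    // 1000^3 voxels in chunks of 128x128x16, at most 128 chunks in the
    // cache, inactive chunks compressed with LZ4
    ChunkedArrayCompressed<3, float> chunked_array(
            Shape3(1000, 1000, 1000), Shape3(128, 128, 16),
            ChunkedArrayOptions().cacheMax(128).compression(LZ4));
\endcode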

The data in the array can be accessed in several ways. The simplest is
via calls to <tt>checkoutSubarray()</tt> and <tt>commitSubarray()</tt>: These
functions copy an arbitrary subregion of a chunked array (possibly straddling
many chunks) into a standard \ref MultiArrayView for processing, and write
results back into the chunked array:
\code
    ChunkedArray<3, float> & chunked_array = ...;

    Shape3 roi_start(1000, 500, 500);
    MultiArray<3, float> work_array(Shape3(100, 100, 100));

    // copy data from region (1000,500,500)...(1100,600,600)
    chunked_array.checkoutSubarray(roi_start, work_array);

    ... // work phase: process data in work_array as usual

    // write results back into chunked_array
    chunked_array.commitSubarray(roi_start, work_array);
\endcode
The required chunks in <tt>chunked_array</tt> will only be active while the
checkout and commit calls are executing. During the work phase, other threads
can use the chunked array's cache to checkout or commit different subregions.

Alternatively, one can work directly on the chunk storage. This is most easily
achieved by means of chunk iterators:
\code
    ChunkedArray<3, float> & chunked_array = ...;

    // define the ROI to be processed
    Shape3 roi_start(100, 200, 300), roi_end(1000, 2000, 600);

    // get a pair of chunk iterators ( = iterators over chunks)
    auto chunk = chunked_array.chunk_begin(roi_start, roi_end),
         end   = chunked_array.chunk_end(roi_start, roi_end);

    // iterate over the chunks in the ROI
    for(; chunk != end; ++chunk)
    {
        // get a view to the current chunk's data
        // Note: The view actually refers to the intersection of the
        //       current chunk with the ROI. Thus, chunks which are
        //       partially outside the ROI are appropriately trimmed.
        MultiArrayView<3, float> chunk_view = *chunk;

        ... // work phase: process data in chunk_view as usual
    }
\endcode
No memory is duplicated in this approach, and only the current chunk needs
to be active, so that a small chunk cache is sufficient. The iteration
over chunks can be distributed over several threads that process the array
data in parallel. The programmer must make sure that write operations to
individual elements are synchronized between threads. This is usually
achieved by ensuring that the threads are responsible for non-overlapping
regions of the output array.
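
One possible work split (a sketch only; it assumes C++11 <thread> and divides
the ROI of the example above into two disjoint halves along the z-axis):
\code
    auto process_roi = [&chunked_array](Shape3 start, Shape3 stop)
    {
        auto chunk = chunked_array.chunk_begin(start, stop),
             end   = chunked_array.chunk_end(start, stop);
        for(; chunk != end; ++chunk)
        {
            MultiArrayView<3, float> chunk_view = *chunk;
            ... // process chunk_view, writing only within [start, stop)
        }
    };
    std::thread t1(process_roi, Shape3(100, 200, 300), Shape3(1000, 2000, 450));
    std::thread t2(process_roi, Shape3(100, 200, 450), Shape3(1000, 2000, 600));
    t1.join();
    t2.join();
\endcode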

An even simpler method is direct element access via indexing. However, the
chunked array has no control over the access order in this case, so it must
potentially activate the appropriate chunk upon each access. This is rather
expensive and should only be used for debugging:
\code
    ChunkedArray<3, float> & chunked_array = ...;

    Shape3 index(100, 200, 300);
    // access data at coordinate 'index'
    chunked_array.setItem(index, chunked_array.getItem(index) + 2.0);
\endcode

Two additional APIs provide access in a way compatible with an ordinary
\ref MultiArrayView. These APIs should be used in functions that are
supposed to work unchanged on both ordinary and chunked arrays. The first
possibility is the chunked scan-order iterator:
\code
    ChunkedArray<3, float> & chunked_array = ...;

    // get a pair of scan-order iterators ( = iterators over elements)
    auto iter = chunked_array.begin(),
         end  = chunked_array.end();

    // iterate over all array elements
    for(; iter != end; ++iter)
    {
        // access current element
        *iter = *iter + 2.0;
    }
\endcode
A new chunk must potentially be activated whenever the iterator crosses
a chunk boundary. Since the overhead of the activation operation can be
amortized over many within-chunk steps, the iteration (excluding the
workload within the loop) takes only twice as long as the iteration over an
unstrided array using an ordinary \ref StridedScanOrderIterator.

The final possibility is the creation of a MultiArrayView that accesses
an arbitrary ROI directly:
\code
    ChunkedArray<3, float> & chunked_array = ...;

    // define the ROI to be processed
    Shape3 roi_start(100, 200, 300), roi_end(1000, 2000, 600);

    // create view for ROI
    MultiArrayView<3, float, ChunkedArrayTag> view =
        chunked_array.subarray(roi_start, roi_end);

    ... // work phase: process view like any ordinary MultiArrayView
\endcode
Similarly, a lower-dimensional view can be created with one of the
<tt>bind</tt> functions. This approach has the advantage that 'view'
can be passed to any function which is implemented in terms of
MultiArrayViews. However, there are two disadvantages: First, data access
in the view requires two steps (first find the chunk, then find the
appropriate element in the chunk), which causes the chunked view to
be slower than an ordinary MultiArrayView. Second, all chunks intersected
by the view must remain active throughout the view's lifetime, which
may require a big chunk cache and thus keeps many chunks in memory.
*/
template <unsigned int N, class T>
class ChunkedArray
: public ChunkedArrayBase<N, T>
{
    /*
    FIXME:
    * backends:
      * allocators are not used
      * HDF5 only works for scalar types so far
      * HDF5 must support read-only and read/write mode
      * temp file arrays in swap (just an API addition to the constructor)
      * support TIFF chunked reading
    * the array implementations should go into cxx files in src/impex
      * this requires implementation of the low-level functions independently of dtype
        (use 'char *' and multiply shape and stride with sizeof(T))
      * don't forget to increment the soversion after the change
      * alternative: provide 'config_local.hxx' with flags for available packages
    * decide on chunk locking policies for array views (in particular, for index access)
      * array view has functions fetch()/release() (better names?) to lock/unlock
        _all_ chunks in the view
      * release() is automatically called in the destructor
      * it should be possible to call fetch in the constructor via a flag,
        but should the constructor fetch by default?
      * how should fetch() handle the case when the cache is too small?
        * throw an exception?
        * silently enlarge the cache?
        * temporarily enlarge the cache?
        * provide an option to control the behavior?
      * also provide copySubarray() with ReadOnly and ReadWrite flags, where
        ReadWrite copies the subarray back in the destructor or on demand
        * locking is only required while each slice is copied
        * the copy functions can use normal array views and iterators
        * the ReadWrite version can store a checksum for each chunk (or part
          of a chunk) to detect collisions on write
        * use shared pointers to support memory management of the subarrays?
    * find efficient ways to support slicing and transposition in the indexing
      functions of a view.
      1. possibility: each view contains
         * an index object 'bound_index_' with original dimension whose values denote
           coordinates of bound axes and offsets for unbound coordinates
         * a permutation object 'permutation_' with dimension of the view that maps
           view coordinates to original coordinates
         * that is:
               operator[](index)
               {
                   shape_type full_index(bound_index_);
                   for(int k=0; k<N_view; ++k)
                       full_index[permutation_[k]] += index[k];
                   split full_index into chunk part and local part
                   look up chunk
                   return pixel
               }
         * maybe this is faster if it is combined with the stride computation?
         * an optimization for unsliced arrays is desirable
      2. possibility:
         * add the point offset to the low-dimensional index
         * split the low-dimensional index into chunk part and local part
         * look up the chunk
         * determine the scalar pointer offset from the local part and strides plus a
           chunk-specific correction that can be stored in a 3^N array
           - but can we efficiently determine where to look in that offset array?
      3. possibility:
         * don't care about speed -- require copySubarray() if indexing should
           be fast
    * provide a ChunkIterator that iterates over all chunks in a given ROI and returns a
      MultiArrayView for the present chunk (which remains locked in the cache until the
      iterator is advanced).
    * implement proper copy constructors and assignment for all backends
    * test the HDF5 constructor from an existing dataset
    * put HDF5 into a header of its own
    * is the full chunkForIterator() function slow? Check this with a simplified one
      in a ChunkedArrayLazy where all chunks are already allocated, so that
      we can simply skip the check
    * add support for Multiband and TinyVector pixels

    */

  public:
    typedef ChunkedArrayBase<N, T> base_type;
    typedef typename MultiArrayShape<N>::type shape_type;
    typedef typename shape_type::value_type difference_type_1;
    typedef T value_type;
    typedef value_type * pointer;
    typedef value_type const * const_pointer;
    typedef value_type & reference;
    typedef value_type const & const_reference;
    typedef ChunkIterator<N, T> chunk_iterator;
    typedef ChunkIterator<N, T const> chunk_const_iterator;
    typedef StridedScanOrderIterator<N, ChunkedMemory<T>, reference, pointer> iterator;
    typedef StridedScanOrderIterator<N, ChunkedMemory<T const>, const_reference, const_pointer> const_iterator;
    typedef SharedChunkHandle<N, T> Handle;
    typedef ChunkBase<N, T> Chunk;
    typedef MultiArrayView<N, T, ChunkedArrayTag> view_type;
    typedef MultiArrayView<N, T const, ChunkedArrayTag> const_view_type;
    typedef std::queue<Handle*> CacheType;

    static const long chunk_asleep = Handle::chunk_asleep;
    static const long chunk_uninitialized = Handle::chunk_uninitialized;
    static const long chunk_locked = Handle::chunk_locked;
    static const long chunk_failed = Handle::chunk_failed;

    // constructor only called by derived classes (ChunkedArray is abstract)
    explicit ChunkedArray(shape_type const & shape,
                          shape_type const & chunk_shape = shape_type(),
                          ChunkedArrayOptions const & options = ChunkedArrayOptions())
    : ChunkedArrayBase<N, T>(shape, chunk_shape)
    , bits_(initBitMask(this->chunk_shape_))
    , mask_(this->chunk_shape_ - shape_type(1))
    , cache_max_size_(options.cache_max)
    , chunk_lock_(new threading::mutex())
    , fill_value_(T(options.fill_value))
    , fill_scalar_(options.fill_value)
    , handle_array_(detail::computeChunkArrayShape(shape, bits_, mask_))
    , data_bytes_()
    , overhead_bytes_(handle_array_.size()*sizeof(Handle))
    {
        fill_value_chunk_.pointer_ = &fill_value_;
        fill_value_handle_.pointer_ = &fill_value_chunk_;
        fill_value_handle_.chunk_state_.store(1);
    }

    // compute masks needed for fast index access
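    //   (e.g. chunk_shape = (64, 64, 32)  =>  bits_ = (6, 6, 5), mask_ = (63, 63, 31))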
1644 static shape_type initBitMask(shape_type const & chunk_shape)
1645 {
1646 shape_type res;
1647 for(unsigned int k=0; k<N; ++k)
1648 {
1649 UInt32 bits = log2i(chunk_shape[k]);
1650 vigra_precondition(chunk_shape[k] == MultiArrayIndex(1 << bits),
1651 "ChunkedArray: chunk_shape elements must be powers of 2.");
1652 res[k] = bits;
1653 }
1654 return res;
1655 }
1656
1657 virtual ~ChunkedArray()
1658 {
1659 // std::cerr << " final cache size: " << cacheSize() << " (max: " << cacheMaxSize() << ")\n";
1660 }
1661
1662 /** \brief Number of chunks currently fitting into the cache.
1663 */
1664 int cacheSize() const
1665 {
1666 return cache_.size();
1667 }
1668
1669 /** \brief Bytes of main memory occupied by the array's data.
1670
1671 Compressed chunks are only counted with their compressed size.
1672 Chunks swapped out to the hard drive are not counted.
1673 */
1674 std::size_t dataBytes() const
1675 {
1676 return data_bytes_;
1677 }
1678
1679 /** \brief Bytes of main memory needed to manage the chunked storage.
1680 */
1681 std::size_t overheadBytes() const
1682 {
1683 return overhead_bytes_;
1684 }
1685
1686 /** \brief Number of chunks along each coordinate direction.
1687 */
1688 virtual shape_type chunkArrayShape() const
1689 {
1690 return handle_array_.shape();
1691 }
1692
1693 virtual std::size_t dataBytes(Chunk * c) const = 0;
1694
1695 /** \brief Number of data bytes in an uncompressed chunk.
1696 */
1697 std::size_t dataBytesPerChunk() const
1698 {
1699 return prod(this->chunk_shape_)*sizeof(T);
1700 }
1701
1702 /** \brief Bytes of main memory needed to manage a single chunk.
1703 */
1704 virtual std::size_t overheadBytesPerChunk() const = 0;
1705
1706 /** \brief Find the chunk that contains array element 'global_start'.
1707 */
1708 shape_type chunkStart(shape_type const & global_start) const
1709 {
1710 shape_type chunk_start(SkipInitialization);
1711 detail::ChunkIndexing<N>::chunkIndex(global_start, bits_, chunk_start);
1712 return chunk_start;
1713 }
1714
1715 /** \brief Find the chunk that is beyond array element 'global_stop'.
1716
1717 Specifically, this computes
1718 \code
1719 chunkStart(global_stop - shape_type(1)) + shape_type(1)
1720 \endcode
1721 */
1722 shape_type chunkStop(shape_type global_stop) const
1723 {
1724 global_stop -= shape_type(1);
1725 shape_type chunk_stop(SkipInitialization);
1726 detail::ChunkIndexing<N>::chunkIndex(global_stop, bits_, chunk_stop);
1727 chunk_stop += shape_type(1);
1728 return chunk_stop;
1729 }
1730
1731 /** \brief Find the shape of the chunk indexed by 'chunk_index'.
1732
1733 This may differ from the global chunk shape because chunks at the
1734 right/lower border of the array may be smaller than usual.
1735 */
1736 shape_type chunkShape(shape_type const & chunk_index) const
1737 {
1738 return min(this->chunk_shape_,
1739 this->shape_ - chunk_index*this->chunk_shape_);
1740 }
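    /* Worked example (not part of the original source): for an array with
       shape 200 and chunk_shape 64 along some axis, the border chunk with
       index 3 starts at coordinate 192, so its shape along this axis is
       min(64, 200 - 192) == 8.
    */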
1741
1742 using base_type::chunkShape;
1743
1744#ifdef DOXYGEN
1745 /** \brief Return the global chunk shape.
1746
1747 This is the shape of all chunks that are completely contained
1748 in the array's domain.
1749 */
1750 shape_type const & chunkShape() const;
1751
1752 /** \brief Return the shape of this array.
1753 */
1754 shape_type const & shape() const;
1755
1756 /** \brief Return the number of elements in this array.
1757 */
1758 MultiArrayIndex size() const;
1759
1760 /** \brief Check if the given point is in the array domain.
1761 */
1762 bool isInside(shape_type const & p) const;
1763
1764 /** \brief Return the class that implements this ChunkedArray.
1765 */
1766 std::string backend() const;
1767
1768#endif
1769
1770 inline void
1771 checkSubarrayBounds(shape_type const & start, shape_type const & stop,
1772 std::string message) const
1773 {
1774 message += ": subarray out of bounds.";
1775 vigra_precondition(allLessEqual(shape_type(), start) &&
1776 allLess(start, stop) &&
1777 allLessEqual(stop, this->shape_),
1778 message);
1779 }
1780
1781 /** \brief Check if two arrays are elementwise equal.
1782 */
1783 template <class U, class C1>
1784 bool operator==(MultiArrayView<N, U, C1> const & rhs) const
1785 {
1786 if(this->shape() != rhs.shape())
1787 return false;
1788 const_iterator i = begin(), ie = end();
1789 typename MultiArrayView<N, U, C1>::const_iterator j = rhs.begin();
1790 for(; i != ie; ++i, ++j)
1791 if(*i != *j)
1792 return false;
1793 return true;
1794 }
1795
1796 /** \brief Check if two arrays differ in at least one element.
1797 */
1798 template <class U, class C1>
1799 bool operator!=(MultiArrayView<N, U, C1> const & rhs) const
1800 {
1801 return !operator==(rhs);
1802 }
1803
1804 // internal function to activate a chunk
1805 virtual pointer loadChunk(Chunk ** chunk, shape_type const & chunk_index) = 0;
1806
1807 // internal function to send a chunk asleep or delete it
1808 // entirely (when destroy = true).
1809 // returns true if the chunk was deleted, false otherwise
1810 virtual bool unloadHandle(Handle * handle, bool destroy = false)
1811 {
1812 if(handle == &fill_value_handle_)
1813 return false;
1814 return unloadChunk(handle->pointer_, destroy);
1815 }
1816
1817 virtual bool unloadChunk(Chunk * chunk, bool destroy = false) = 0;
1818
1819 Handle * lookupHandle(shape_type const & index)
1820 {
1821 return &handle_array_[index];
1822 }
1823
1824 // Decrease the reference counter of the given chunk.
1825 // Will inactivate the chunk when reference counter reaches zero.
1826 virtual void unrefChunk(IteratorChunkHandle<N, T> * h) const
1827 {
1828 unrefChunk(h->chunk_);
1829 h->chunk_ = 0;
1830 }
1831
1832 // Likewise
1833 void unrefChunk(Handle * chunk) const
1834 {
1835 if(chunk)
1836 {
1837 long rc = chunk->chunk_state_.fetch_sub(1);
1838 ignore_argument(rc);
1839 #ifdef VIGRA_CHECK_BOUNDS
1840 vigra_invariant(rc >= 0,
1841 "ChunkedArray::unrefChunk(): chunk refcount got negative!");
1842 #endif
1843 }
1844 }
1845
1846 // Decrease the reference counter of several chunks simultaneously.
1847 void unrefChunks(ArrayVector<Handle*> const & chunks)
1848 {
1849 for(unsigned int k=0; k<chunks.size(); ++k)
1850 unrefChunk(chunks[k]);
1851
1852 if(cacheMaxSize() > 0)
1853 {
1854 threading::lock_guard<threading::mutex> guard(*chunk_lock_);
1855 cleanCache(cache_.size());
1856 }
1857 }
1858
1859 // Increase the reference counter of the given chunk.
1860 // If the chunk was asleep, the function first awakens it.
1861 long acquireRef(Handle * handle) const
1862 {
1863 // Obtain a reference to the current chunk handle.
1864 // We use a simple spin-lock here because it is very fast in case of success,
1865 // and failures (i.e. collisions with another thread) are presumably
1866 // very rare.
1867 //
1868 // the function returns the old value of chunk_state_
1869 long rc = handle->chunk_state_.load(threading::memory_order_acquire);
1870 while(true)
1871 {
1872 if(rc >= 0)
1873 {
1874 if(handle->chunk_state_.compare_exchange_weak(rc, rc+1, threading::memory_order_seq_cst))
1875 {
1876 return rc;
1877 }
1878 }
1879 else
1880 {
1881 if(rc == chunk_failed)
1882 {
1883 vigra_precondition(false,
1884 "ChunkedArray::acquireRef() attempt to access failed chunk.");
1885 }
1886 else if(rc == chunk_locked)
1887 {
1888 // cache management in progress => try again later
1889 threading::this_thread::yield();
1890 rc = handle->chunk_state_.load(threading::memory_order_acquire);
1891 }
1892 else if(handle->chunk_state_.compare_exchange_weak(rc, chunk_locked, threading::memory_order_seq_cst))
1893 {
1894 return rc;
1895 }
1896 }
1897 }
1898 }
1899
1900 pointer
1901 getChunk(Handle * handle, bool isConst, bool insertInCache, shape_type const & chunk_index) const
1902 {
1903 ChunkedArray * self = const_cast<ChunkedArray *>(this);
1904
1905 long rc = acquireRef(handle);
1906 if(rc >= 0)
1907 return handle->pointer_->pointer_;
1908
1909 threading::lock_guard<threading::mutex> guard(*chunk_lock_);
1910 try
1911 {
1912 T * p = self->loadChunk(&handle->pointer_, chunk_index);
1913 Chunk * chunk = handle->pointer_;
1914 if(!isConst && rc == chunk_uninitialized)
1915 std::fill(p, p + prod(chunkShape(chunk_index)), this->fill_value_);
1916
1917 self->data_bytes_ += dataBytes(chunk);
1918
1919 if(cacheMaxSize() > 0 && insertInCache)
1920 {
1921 // insert in queue of mapped chunks
1922 self->cache_.push(handle);
1923
1924 // do cache management if cache is full
1925 // (note that we still hold the chunk_lock_)
1926 self->cleanCache(2);
1927 }
1928 handle->chunk_state_.store(1, threading::memory_order_release);
1929 return p;
1930 }
1931 catch(...)
1932 {
1933 handle->chunk_state_.store(chunk_failed);
1934 throw;
1935 }
1936 }
1937
1938 // helper function for chunkForIterator()
1939 inline pointer
1940 chunkForIteratorImpl(shape_type const & point,
1941 shape_type & strides, shape_type & upper_bound,
1942 IteratorChunkHandle<N, T> * h,
1943 bool isConst) const
1944 {
1945 ChunkedArray * self = const_cast<ChunkedArray *>(this);
1946
1947 unrefChunk(h->chunk_);
1948 h->chunk_ = 0;
1949
1950 shape_type global_point = point + h->offset_;
1951
1952 if(!this->isInside(global_point))
1953 {
1954 upper_bound = point + this->chunk_shape_;
1955 return 0;
1956 }
1957
1958 shape_type chunkIndex(chunkStart(global_point));
1959
1960 bool insertInCache = true;
1961 Handle * handle = self->lookupHandle(chunkIndex);
1962 if(isConst && handle->chunk_state_.load() == chunk_uninitialized)
1963 {
1964 handle = &self->fill_value_handle_;
1965 insertInCache = false;
1966 }
1967
1968 pointer p = getChunk(handle, isConst, insertInCache, chunkIndex);
1969 strides = handle->strides();
1970 upper_bound = (chunkIndex + shape_type(1)) * this->chunk_shape_ - h->offset_;
1971 std::size_t offset = detail::ChunkIndexing<N>::offsetInChunk(global_point, mask_, strides);
1972 h->chunk_ = handle;
1973 return p + offset;
1974 }
1975
1976 // called by chunked scan-order iterator to obtain the new data pointer
1977 // when the iterator enters a new chunk
1978 virtual pointer chunkForIterator(shape_type const & point,
1979 shape_type & strides, shape_type & upper_bound,
1980 IteratorChunkHandle<N, T> * h)
1981 {
1982 return chunkForIteratorImpl(point, strides, upper_bound, h, false);
1983 }
1984
1985 virtual pointer chunkForIterator(shape_type const & point,
1986 shape_type & strides, shape_type & upper_bound,
1987 IteratorChunkHandle<N, T> * h) const
1988 {
1989 return chunkForIteratorImpl(point, strides, upper_bound, h, true);
1990 }
1991
1992 // NOTE: This function must only be called while we hold the chunk_lock_.
1993 // This implies refcount != chunk_locked, so that race conditions are avoided.
1994 long releaseChunk(Handle * handle, bool destroy = false)
1995 {
1996 long rc = 0;
1997 bool mayUnload = handle->chunk_state_.compare_exchange_strong(rc, chunk_locked);
1998 if(!mayUnload && destroy)
1999 {
2000 rc = chunk_asleep;
2001 mayUnload = handle->chunk_state_.compare_exchange_strong(rc, chunk_locked);
2002 }
2003 if(mayUnload)
2004 {
2005 // refcount was zero or chunk_asleep => can unload
2006 try
2007 {
2008 vigra_invariant(handle != &fill_value_handle_,
2009 "ChunkedArray::releaseChunk(): attempt to release fill_value_handle_.");
2010 Chunk * chunk = handle->pointer_;
2011 this->data_bytes_ -= dataBytes(chunk);
2012 int didDestroy = unloadChunk(chunk, destroy);
2013 this->data_bytes_ += dataBytes(chunk);
2014 if(didDestroy)
2015 handle->chunk_state_.store(chunk_uninitialized);
2016 else
2017 handle->chunk_state_.store(chunk_asleep);
2018 }
2019 catch(...)
2020 {
2021 handle->chunk_state_.store(chunk_failed);
2022 throw;
2023 }
2024 }
2025 return rc;
2026 }
2027
2028 // NOTE: this function must only be called while we hold the chunk_lock_
2029 void cleanCache(int how_many = -1)
2030 {
2031 if(how_many == -1)
2032 how_many = cache_.size();
2033 for(; cache_.size() > cacheMaxSize() && how_many > 0; --how_many)
2034 {
2035 Handle * handle = cache_.front();
2036 cache_.pop();
2037 long rc = releaseChunk(handle);
2038 if(rc > 0) // refcount was positive => chunk is still needed
2039 cache_.push(handle);
2040 }
2041 }
2042
2043 /** Sends asleep all chunks that lie completely inside the given ROI.
2044 If destroy == true and the backend supports destruction (currently:
2045 ChunkedArrayLazy and ChunkedArrayCompressed), chunks will be deleted
2046 entirely. The chunk's contents after releaseChunks() are undefined.
2047 Currently, chunks retain their values when sent asleep, and assume the
2048 array's fill_value when deleted, but applications should not rely on this
2049 behavior.
2050 */
2051 void releaseChunks(shape_type const & start, shape_type const & stop, bool destroy = false)
2052 {
2053 checkSubarrayBounds(start, stop, "ChunkedArray::releaseChunks()");
2054
2055 MultiCoordinateIterator<N> i(chunkStart(start), chunkStop(stop)),
2056 end(i.getEndIterator());
2057 for(; i != end; ++i)
2058 {
2059 shape_type chunkOffset = *i * this->chunk_shape_;
2060 if(!allLessEqual(start, chunkOffset) ||
2061 !allLessEqual(min(chunkOffset+this->chunk_shape_, this->shape()), stop))
2062 {
2063 // chunk is only partially covered by the ROI
2064 continue;
2065 }
2066
2067 Handle * handle = this->lookupHandle(*i);
2068 threading::lock_guard<threading::mutex> guard(*chunk_lock_);
2069 releaseChunk(handle, destroy);
2070 }
2071
2072 // remove all chunks from the cache that are asleep or uninitialized
2073 threading::lock_guard<threading::mutex> guard(*chunk_lock_);
2074 int cache_size = cache_.size();
2075 for(int k=0; k < cache_size; ++k)
2076 {
2077 Handle * handle = cache_.front();
2078 cache_.pop();
2079 if(handle->chunk_state_.load() >= 0)
2080 cache_.push(handle);
2081 }
2082 }
2083
2084 /** \brief Copy an ROI of the chunked array into an ordinary MultiArrayView.
2085
2086 The ROI's lower bound is given by 'start', its upper bound (in 'beyond' sense)
2087 is 'start + subarray.shape()'. Chunks in the ROI are only activated while
2088 the read is in progress.
2089 */
2090 template <class U, class Stride>
2091 void
2092 checkoutSubarray(shape_type const & start,
2093 MultiArrayView<N, U, Stride> & subarray) const
2094 {
2095 shape_type stop = start + subarray.shape();
2096
2097 checkSubarrayBounds(start, stop, "ChunkedArray::checkoutSubarray()");
2098
2099 chunk_const_iterator i = chunk_cbegin(start, stop);
2100 for(; i.isValid(); ++i)
2101 {
2102 subarray.subarray(i.chunkStart()-start, i.chunkStop()-start) = *i;
2103 }
2104 }
2105
2106 /** \brief Copy an ordinary MultiArrayView into an ROI of the chunked array.
2107
2108 The ROI's lower bound is given by 'start', its upper bound (in 'beyond' sense)
2109 is 'start + subarray.shape()'. Chunks in the ROI are only activated while
2110 the write is in progress.
2111 */
2112 template <class U, class Stride>
2113 void
2114 commitSubarray(shape_type const & start,
2115 MultiArrayView<N, U, Stride> const & subarray)
2116 {
2117 shape_type stop = start + subarray.shape();
2118
2119 vigra_precondition(!this->isReadOnly(),
2120 "ChunkedArray::commitSubarray(): array is read-only.");
2121 checkSubarrayBounds(start, stop, "ChunkedArray::commitSubarray()");
2122
2123 chunk_iterator i = chunk_begin(start, stop);
2124 for(; i.isValid(); ++i)
2125 {
2126 *i = subarray.subarray(i.chunkStart()-start, i.chunkStop()-start);
2127 }
2128 }
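    /* Usage sketch (illustrative, not from the original source; the 3D float
       array and the concrete coordinates are assumptions):
       \code
       ChunkedArrayCompressed<3, float> a(Shape3(500, 500, 500));
       MultiArray<3, float> roi(Shape3(100, 100, 100));

       a.checkoutSubarray(Shape3(10, 20, 30), roi);  // copy ROI into ordinary memory
       // ... process 'roi' with any algorithm expecting a MultiArrayView ...
       a.commitSubarray(Shape3(10, 20, 30), roi);    // write the result back
       \endcode
    */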
2129
2130 // helper function for subarray()
2131 template <class View>
2132 void subarrayImpl(shape_type const & start, shape_type const & stop,
2133 View & view,
2134 bool isConst) const
2135 {
2136 vigra_precondition(isConst || !this->isReadOnly(),
2137 "ChunkedArray::subarray(): array is read-only.");
2138 checkSubarrayBounds(start, stop, "ChunkedArray::subarray()");
2139 shape_type chunk_start(chunkStart(start)), chunk_stop(chunkStop(stop));
2140
2141 view.shape_ = stop-start;
2142 view.chunk_shape_ = this->chunk_shape_;
2143 view.chunks_.reshape(chunk_stop-chunk_start);
2144 view.offset_ = start - chunk_start * this->chunk_shape_;
2145 view.bits_ = bits_;
2146 view.mask_ = mask_;
2147
2148 typedef typename View::UnrefProxy Unref;
2149 ChunkedArray* self = const_cast<ChunkedArray*>(this);
2150 Unref * unref = new Unref(view.chunks_.size(), self);
2151 view.unref_ = VIGRA_SHARED_PTR<Unref>(unref);
2152
2153 MultiCoordinateIterator<N> i(chunk_start, chunk_stop),
2154 end(i.getEndIterator());
2155 for(; i != end; ++i)
2156 {
2157 Handle * handle = self->lookupHandle(*i);
2158
2159 if(isConst && handle->chunk_state_.load() == chunk_uninitialized)
2160 handle = &self->fill_value_handle_;
2161
2162 // This potentially acquires the chunk_lock_ in each iteration.
2163 // Would it be better to acquire it once before the loop?
2164 pointer p = getChunk(handle, isConst, true, *i);
2165
2166 ChunkBase<N, T> * mini_chunk = &view.chunks_[*i - chunk_start];
2167 mini_chunk->pointer_ = p;
2168 mini_chunk->strides_ = handle->strides();
2169 unref->chunks_[i.scanOrderIndex()] = handle;
2170 }
2171 }
2172
2173 /** \brief Create a view to the specified ROI.
2174
2175 The view can be used like an ordinary \ref MultiArrayView, but is
2176 a bit slower. All chunks intersecting the view remain active
2177 throughout the view's lifetime.
2178 */
2179 view_type
2180 subarray(shape_type const & start, shape_type const & stop)
2181 {
2182 view_type view;
2183 subarrayImpl(start, stop, view, false);
2184 return view;
2185 }
2186
2187 /** \brief Create a read-only view to the specified ROI.
2188
2189 The view can be used like an ordinary \ref MultiArrayView, but is
2190 a bit slower. All chunks intersecting the view remain active
2191 throughout the view's lifetime.
2192 */
2193 const_view_type
2194 subarray(shape_type const & start, shape_type const & stop) const
2195 {
2196 const_view_type view;
2197 subarrayImpl(start, stop, view, true);
2198 return view;
2199 }
2200
2201 /** \brief Create a read-only view to the specified ROI.
2202
2203 The view can be used like an ordinary \ref MultiArrayView, but is
2204 a bit slower. All chunks intersecting the view remain active
2205 throughout the view's lifetime.
2206 */
2207 const_view_type
2208 const_subarray(shape_type const & start, shape_type const & stop) const
2209 {
2210 const_view_type view;
2211 subarrayImpl(start, stop, view, true);
2212 return view;
2213 }
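    /* Usage sketch (illustrative, not from the original source): the view
       pins all intersecting chunks, so element access involves no further
       cache lookups.
       \code
       ChunkedArrayTmpFile<3, float> a(Shape3(1000, 1000, 1000));
       MultiArrayView<3, float, ChunkedArrayTag> v =
           a.subarray(Shape3(100, 100, 100), Shape3(200, 200, 200));
       v[Shape3(0, 0, 0)] = 1.0f;   // writes through to the chunked storage
       \endcode
    */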
2214
2215 /** \brief Read the array element at index 'point'.
2216
2217 Since the corresponding chunk must potentially be activated
2218 first, this function may be slow and should mainly be used in
2219 debugging.
2220 */
2221 value_type getItem(shape_type const & point) const
2222 {
2223 vigra_precondition(this->isInside(point),
2224 "ChunkedArray::getItem(): index out of bounds.");
2225
2226 ChunkedArray * self = const_cast<ChunkedArray*>(this);
2227 shape_type chunk_index(chunkStart(point));
2228 Handle * handle = self->lookupHandle(chunk_index);
2229 if(handle->chunk_state_.load() == chunk_uninitialized)
2230 return fill_value_;
2231 pointer p = self->getChunk(handle, true, false, chunk_index);
2232 value_type res = *(p +
2233 detail::ChunkIndexing<N>::offsetInChunk(point, mask_, handle->strides()));
2234 self->unrefChunk(handle);
2235 return res;
2236 }
2237
2238 /** \brief Write the array element at index 'point'.
2239
2240 Since the corresponding chunk must potentially be activated
2241 first, this function may be slow and should mainly be used in
2242 debugging.
2243 */
2244 void setItem(shape_type const & point, value_type const & v)
2245 {
2246 vigra_precondition(!this->isReadOnly(),
2247 "ChunkedArray::setItem(): array is read-only.");
2248 vigra_precondition(this->isInside(point),
2249 "ChunkedArray::setItem(): index out of bounds.");
2250
2251 shape_type chunk_index(chunkStart(point));
2252 Handle * handle = lookupHandle(chunk_index);
2253 pointer p = getChunk(handle, false, false, chunk_index);
2254 *(p + detail::ChunkIndexing<N>::offsetInChunk(point, mask_, handle->strides())) = v;
2255 unrefChunk(handle);
2256 }
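    /* Usage sketch (illustrative, not from the original source; assumes the
       default fill value 0):
       \code
       ChunkedArrayLazy<3, int> a(Shape3(100, 100, 100));
       a.setItem(Shape3(1, 2, 3), 42);          // activates exactly one chunk
       int v = a.getItem(Shape3(1, 2, 3));      // == 42
       int w = a.getItem(Shape3(90, 90, 90));   // == 0, read from the shared
                                                // fill-value chunk
       \endcode
    */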
2257
2258 /** \brief Create a lower dimensional view to the chunked array.
2259
2260 Dimension 'dim' is bound at 'index', all other dimensions remain
2261 unchanged. All chunks intersecting the view remain active
2262 throughout the view's lifetime.
2263 */
2264 MultiArrayView <N-1, T, ChunkedArrayTag>
2265 bindAt (MultiArrayIndex dim, MultiArrayIndex index) const
2266 {
2267 shape_type start, stop(this->shape());
2268 start[dim] = index;
2269 stop[dim] = index+1;
2270 return subarray(start, stop).bindAt(dim, 0);
2271 }
2272
2273 /** \brief Create a lower dimensional view to the chunked array.
2274
2275 Dimension 'M' (given as a template parameter) is bound at 'index',
2276 all other dimensions remain unchanged. All chunks intersecting the
2277 view remain active throughout the view's lifetime.
2278 */
2279 template <unsigned int M>
2280 MultiArrayView <N-1, T, ChunkedArrayTag>
2281 bind (difference_type_1 index) const
2282 {
2283 return bindAt(M, index);
2284 }
2285
2286 /** \brief Create a lower dimensional view to the chunked array.
2287
2288 Dimension 'N-1' is bound at 'index', all other dimensions remain
2289 unchanged. All chunks intersecting the view remain active
2290 throughout the view's lifetime.
2291 */
2292 MultiArrayView <N-1, T, ChunkedArrayTag>
2293 bindOuter (difference_type_1 index) const
2294 {
2295 return bindAt(N-1, index);
2296 }
2297
2298 /** \brief Create a lower dimensional view to the chunked array.
2299
2300 The M rightmost dimensions are bound to the indices given in 'd'.
2301 All chunks intersecting the view remain active throughout the view's lifetime.
2302 */
2303 template <int M, class Index>
2304 MultiArrayView <N-M, T, ChunkedArrayTag>
2305 bindOuter(const TinyVector <Index, M> & d) const
2306 {
2307 return bindAt(N-1, d[M-1]).bindOuter(d.dropIndex(M-1));
2308 }
2309
2310 // terminate the recursion of the above function
2311 template <class Index>
2312 MultiArrayView <N-1, T, ChunkedArrayTag>
2313 bindOuter(const TinyVector <Index, 1> & d) const
2314 {
2315 return bindAt(N-1, d[0]);
2316 }
2317
2318 /** \brief Create a lower dimensional view to the chunked array.
2319
2320 Dimension '0' is bound at 'index', all other dimensions remain
2321 unchanged. All chunks intersecting the view remain active
2322 throughout the view's lifetime.
2323 */
2324 MultiArrayView <N-1, T, ChunkedArrayTag>
2325 bindInner (difference_type_1 index) const
2326 {
2327 return bindAt(0, index);
2328 }
2329
2330 /** \brief Create a lower dimensional view to the chunked array.
2331
2332 The M leftmost dimensions are bound to the indices given in 'd'.
2333 All chunks intersecting the view remain active throughout the view's lifetime.
2334 */
2335 template <int M, class Index>
2336 MultiArrayView <N-M, T, ChunkedArrayTag>
2337 bindInner(const TinyVector <Index, M> & d) const
2338 {
2339 return bindAt(0, d[0]).bindInner(d.dropIndex(0));
2340 }
2341
2342 // terminate the recursion of the above function
2343 template <class Index>
2344 MultiArrayView <N-1, T, ChunkedArrayTag>
2345 bindInner(const TinyVector <Index, 1> & d) const
2346 {
2347 return bindAt(0, d[0]);
2348 }
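    /* Usage sketch (illustrative, not from the original source): all bind
       variants reduce to bindAt().
       \code
       ChunkedArrayLazy<3, float> a(Shape3(100, 200, 300));
       MultiArrayView<2, float, ChunkedArrayTag> s1 = a.bindAt(2, 15);  // plane z == 15
       MultiArrayView<2, float, ChunkedArrayTag> s2 = a.bind<2>(15);    // same, dimension as
                                                                        // template parameter
       MultiArrayView<2, float, ChunkedArrayTag> s3 = a.bindOuter(15);  // same, dimension N-1
       \endcode
    */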
2349
2350 /** \brief Get the number of chunks the cache will hold.
2351
2352 If there are any inactive chunks in the cache, these will be
2353 sent asleep until the max cache size is reached. The max cache
2354 size may be temporarily overridden when more chunks need
2355 to be active simultaneously.
2356 */
2357 std::size_t cacheMaxSize() const
2358 {
2359 if(cache_max_size_ < 0)
2360 const_cast<int &>(cache_max_size_) = detail::defaultCacheSize(this->chunkArrayShape());
2361 return cache_max_size_;
2362 }
2363
2364 /** \brief Set the number of chunks the cache will hold.
2365
2366 This should be big enough to hold all chunks that are frequently needed
2367 and must therefore be adapted to the application's access pattern.
2368 */
2369 void setCacheMaxSize(std::size_t c)
2370 {
2371 cache_max_size_ = c;
2372 if(c < cache_.size())
2373 {
2374 threading::lock_guard<threading::mutex> guard(*chunk_lock_);
2375 cleanCache();
2376 }
2377 }
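    /* Sizing sketch (illustrative, not from the original source): for
       slice-wise processing of a 1024 x 1024 x Z volume with 64^3 chunks,
       one full slab of chunks must be active at a time:
       \code
       a.setCacheMaxSize((1024 / 64) * (1024 / 64));   // 16 * 16 == 256 chunks
       \endcode
    */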
2378
2379 /** \brief Create a scan-order iterator for the entire chunked array.
2380 */
2381 iterator begin()
2382 {
2383 return createCoupledIterator(*this);
2384 }
2385
2386 /** \brief Create the end iterator for scan-order iteration over
2387 the entire chunked array.
2388 */
2389 iterator end()
2390 {
2391 return begin().getEndIterator();
2392 }
2393
2394 /** \brief Create a read-only scan-order iterator for the entire
2395 chunked array.
2396 */
2397 const_iterator cbegin() const
2398 {
2399 return createCoupledIterator(const_cast<ChunkedArray const &>(*this));
2400 }
2401
2402 /** \brief Create the end iterator for read-only scan-order iteration over
2403 the entire chunked array.
2404 */
2405 const_iterator cend() const
2406 {
2407 return cbegin().getEndIterator();
2408 }
2409
2410 /** \brief Create a read-only scan-order iterator for the entire
2411 chunked array.
2412 */
2413 const_iterator begin() const
2414 {
2415 return createCoupledIterator(*this);
2416 }
2417
2418 /** \brief Create the end iterator for read-only scan-order iteration over
2419 the entire chunked array.
2420 */
2421 const_iterator end() const
2422 {
2423 return begin().getEndIterator();
2424 }
2425
2426 /** \brief Create an iterator over all chunks intersected by the given ROI.
2427 */
2428 chunk_iterator chunk_begin(shape_type const & start, shape_type const & stop)
2429 {
2430 checkSubarrayBounds(start, stop, "ChunkedArray::chunk_begin()");
2431 return chunk_iterator(this, start, stop, chunkStart(start), chunkStop(stop), this->chunk_shape_);
2432 }
2433
2434 /** \brief Create the end iterator for iteration over all chunks
2435 intersected by the given ROI.
2436 */
2437 chunk_iterator chunk_end(shape_type const & start, shape_type const & stop)
2438 {
2439 return chunk_begin(start, stop).getEndIterator();
2440 }
2441
2442 /** \brief Create a read-only iterator over all chunks intersected
2443 by the given ROI.
2444 */
2445 chunk_const_iterator chunk_begin(shape_type const & start, shape_type const & stop) const
2446 {
2447 checkSubarrayBounds(start, stop, "ChunkedArray::chunk_begin()");
2448 return chunk_const_iterator(this, start, stop, chunkStart(start), chunkStop(stop), this->chunk_shape_);
2449 }
2450
2451 /** \brief Create the end iterator for read-only iteration over all chunks
2452 intersected by the given ROI.
2453 */
2454 chunk_const_iterator chunk_end(shape_type const & start, shape_type const & stop) const
2455 {
2456 return chunk_begin(start, stop).getEndIterator();
2457 }
2458
2459 /** \brief Create a read-only iterator over all chunks intersected
2460 by the given ROI.
2461 */
2462 chunk_const_iterator chunk_cbegin(shape_type const & start, shape_type const & stop) const
2463 {
2464 checkSubarrayBounds(start, stop, "ChunkedArray::chunk_cbegin()");
2465 return chunk_const_iterator(this, start, stop, chunkStart(start), chunkStop(stop), this->chunk_shape_);
2466 }
2467
2468 /** \brief Create the end iterator for read-only iteration over all chunks
2469 intersected by the given ROI.
2470 */
2471 chunk_const_iterator chunk_cend(shape_type const & start, shape_type const & stop) const
2472 {
2473 return chunk_cbegin(start, stop).getEndIterator();
2474 }
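    /* Usage sketch (illustrative, not from the original source): '*i' is a
       MultiArrayView over the chunk's intersection with the ROI.
       \code
       ChunkedArrayCompressed<3, int> a(Shape3(200, 200, 200));
       ChunkedArray<3, int>::chunk_iterator i =
           a.chunk_begin(Shape3(0), Shape3(100, 100, 100));
       for(; i.isValid(); ++i)
           i->init(1);   // fill the ROI chunk by chunk
       \endcode
    */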
2475
2476 shape_type bits_, mask_;
2477 int cache_max_size_;
2478 VIGRA_SHARED_PTR<threading::mutex> chunk_lock_;
2479 CacheType cache_;
2480 Chunk fill_value_chunk_;
2481 Handle fill_value_handle_;
2482 value_type fill_value_;
2483 double fill_scalar_;
2484 MultiArray<N, Handle> handle_array_;
2485 std::size_t data_bytes_, overhead_bytes_;
2486};
2487
2488/** Returns a CoupledScanOrderIterator to simultaneously iterate over the array m and its coordinates.
2489 */
2490template <unsigned int N, class T>
2491typename ChunkedArray<N, T>::iterator
2492createCoupledIterator(ChunkedArray<N, T> & m)
2493{
2494 typedef typename ChunkedArray<N, T>::iterator IteratorType;
2495 typedef typename IteratorType::handle_type P1;
2496 typedef typename P1::base_type P0;
2497
2498 return IteratorType(P1(m,
2499 P0(m.shape())));
2500}
2501
2502template <unsigned int N, class T>
2503typename ChunkedArray<N, T>::const_iterator
2504createCoupledIterator(ChunkedArray<N, T> const & m)
2505{
2506 typedef typename ChunkedArray<N, T>::const_iterator IteratorType;
2507 typedef typename IteratorType::handle_type P1;
2508 typedef typename P1::base_type P0;
2509
2510 return IteratorType(P1(m,
2511 P0(m.shape())));
2512}
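/* Usage sketch (illustrative, not from the original source): the coupled
   iterator provides the current coordinate via point() and the data element
   via get<1>(); ChunkedArray::begin()/end() use these factories internally.
   \code
   ChunkedArrayLazy<2, int> a(Shape2(4, 4));
   ChunkedArray<2, int>::iterator i = a.begin(), end = a.end();
   for(; i != end; ++i)
       i.get<1>() = i.point()[0];   // store the x coordinate in every element
   \endcode
*/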
2513
2514/** \weakgroup ParallelProcessing
2515 \sa ChunkedArrayFull
2516*/
2517
2518/** Implement ChunkedArray as an ordinary MultiArray with a single chunk.
2519
2520 <b>\#include</b> <vigra/multi_array_chunked.hxx> <br/>
2521 Namespace: vigra
2522*/
2523template <unsigned int N, class T, class Alloc = std::allocator<T> >
2524class ChunkedArrayFull
2525: public ChunkedArray<N, T>,
2526 public MultiArray<N, T, Alloc>
2527{
2528 public:
2529
2530 typedef MultiArray<N, T, Alloc> Storage;
2531 typedef typename Storage::value_type value_type;
2532 typedef typename Storage::pointer pointer;
2533 typedef typename Storage::const_pointer const_pointer;
2534 typedef typename Storage::reference reference;
2535 typedef typename Storage::const_reference const_reference;
2536 typedef typename Storage::difference_type difference_type;
2537 typedef typename Storage::difference_type shape_type;
2538 typedef typename Storage::key_type key_type;
2539 typedef typename Storage::size_type size_type;
2540 typedef typename Storage::difference_type_1 difference_type_1;
2541 typedef typename Storage::iterator iterator;
2542 typedef typename Storage::const_iterator const_iterator;
2543 typedef typename Storage::view_type view_type;
2544
2545 typedef typename ChunkedArray<N, T>::Chunk Chunk;
2546
2547 static shape_type computeChunkShape(shape_type s)
2548 {
2549 for(unsigned k=0; k<N; ++k)
2550 s[k] = ceilPower2(s[k]);
2551 return s;
2552 }
2553
2554 using Storage::subarray;
2555 using Storage::bindOuter;
2556 using Storage::bindInner;
2557 using Storage::bind;
2558 using Storage::bindAt;
2559 using Storage::isInside;
2560 using Storage::shape;
2561 using Storage::size;
2562 using Storage::begin;
2563 using Storage::end;
2564
2565#ifndef DOXYGEN // doxygen doesn't understand this
2566 using Storage::operator==;
2567 using Storage::operator!=;
2568#endif
2569
2570 /** \brief Construct with given 'shape' and 'options', using the allocator
2571 'alloc' to manage the memory.
2572 */
2573 explicit ChunkedArrayFull(shape_type const & shape,
2574 ChunkedArrayOptions const & options = ChunkedArrayOptions(),
2575 Alloc const & alloc = Alloc())
2576 : ChunkedArray<N, T>(shape, computeChunkShape(shape), options.cacheMax(0)),
2577 Storage(shape, this->fill_value_, alloc),
2578 upper_bound_(shape),
2579 chunk_(detail::defaultStride(shape), this->data())
2580 {
2581 this->handle_array_[0].pointer_ = &chunk_;
2582 this->handle_array_[0].chunk_state_.store(1);
2583 this->data_bytes_ = size()*sizeof(T);
2584 this->overhead_bytes_ = overheadBytesPerChunk();
2585 }
2586
2587 ChunkedArrayFull(ChunkedArrayFull const & rhs)
2588 : ChunkedArray<N, T>(rhs),
2589 Storage(rhs),
2590 upper_bound_(rhs.upper_bound_),
2591 chunk_(detail::defaultStride(rhs.shape()), this->data())
2592 {
2593 this->handle_array_[0].pointer_ = &chunk_;
2594 this->handle_array_[0].chunk_state_.store(1);
2595 }
2596
2597 ChunkedArrayFull & operator=(ChunkedArrayFull const & rhs)
2598 {
2599 if(this != &rhs)
2600 {
2601 ChunkedArray<N, T>::operator=(rhs);
2602 Storage::operator=(rhs);
2603 upper_bound_ = rhs.upper_bound_;
2604 }
2605 return *this;
2606 }
2607
2608 ~ChunkedArrayFull()
2609 {}
2610
2611 virtual shape_type chunkArrayShape() const
2612 {
2613 return shape_type(1);
2614 }
2615
2616 virtual pointer loadChunk(ChunkBase<N, T> **, shape_type const &)
2617 {
2618 return this->data();
2619 }
2620
2621 virtual bool unloadChunk(ChunkBase<N, T> *, bool /* destroy */)
2622 {
2623 return false; // never destroys the data
2624 }
2625
2626 virtual std::size_t dataBytes(Chunk *) const
2627 {
2628 return prod(this->shape())*sizeof(T);
2629 }
2630
2631 virtual std::size_t overheadBytesPerChunk() const
2632 {
2633 return sizeof(Chunk) + sizeof(SharedChunkHandle<N, T>);
2634 }
2635
2636 virtual pointer chunkForIterator(shape_type const & point,
2637 shape_type & strides, shape_type & upper_bound,
2638 IteratorChunkHandle<N, T> * h) const
2639 {
2640 shape_type global_point = point + h->offset_;
2641
2642 if(!this->isInside(global_point))
2643 {
2644 upper_bound = point + this->chunk_shape_;
2645 return 0;
2646 }
2647
2648 strides = this->stride();
2649 upper_bound = upper_bound_;
2650 return const_cast<pointer>(&Storage::operator[](global_point));
2651 }
2652
2653 virtual pointer chunkForIterator(shape_type const & point,
2654 shape_type & strides, shape_type & upper_bound,
2655 IteratorChunkHandle<N, T> * h)
2656 {
2657 shape_type global_point = point + h->offset_;
2658
2659 if(!this->isInside(global_point))
2660 {
2661 upper_bound = point + this->chunk_shape_;
2662 return 0;
2663 }
2664
2665 strides = this->stride();
2666 upper_bound = upper_bound_;
2667 return &Storage::operator[](global_point);
2668 }
2669
2670 virtual std::string backend() const
2671 {
2672 return "ChunkedArrayFull";
2673 }
2674
2675 shape_type upper_bound_;
2676 Chunk chunk_; // a dummy chunk to fulfill the API
2677};
2678
2679/** \weakgroup ParallelProcessing
2680 \sa ChunkedArrayLazy
2681*/
2682
2683/** Implement ChunkedArray as a collection of in-memory chunks.
2684
2685 This optimizes over an ordinary MultiArray by allocating chunks only
2686 upon the first write. This is especially useful when only a small
2687 part of the entire array is actually needed, e.g. in a data viewer.
2688
2689 <b>\#include</b> <vigra/multi_array_chunked.hxx> <br/>
2690 Namespace: vigra
2691*/
2692template <unsigned int N, class T, class Alloc = std::allocator<T> >
2693class ChunkedArrayLazy
2694: public ChunkedArray<N, T>
2695{
2696 public:
2697
2698 class Chunk
2699 : public ChunkBase<N, T>
2700 {
2701 public:
2702 typedef typename MultiArrayShape<N>::type shape_type;
2703 typedef T value_type;
2704 typedef value_type * pointer;
2705 typedef value_type & reference;
2706
2707 Chunk(shape_type const & shape, Alloc const & alloc = Alloc())
2708 : ChunkBase<N, T>(detail::defaultStride(shape))
2709 , size_(prod(shape))
2710 , alloc_(alloc)
2711 {}
2712
2713 ~Chunk()
2714 {
2715 deallocate();
2716 }
2717
2718 pointer allocate()
2719 {
2720 if(this->pointer_ == 0)
2721 this->pointer_ = detail::alloc_initialize_n<T>(size_, T(), alloc_);
2722 return this->pointer_;
2723 }
2724
2725 void deallocate()
2726 {
2727 detail::destroy_dealloc_n(this->pointer_, size_, alloc_);
2728 this->pointer_ = 0;
2729 }
2730
2731 MultiArrayIndex size_;
2732 Alloc alloc_;
2733
2734 private:
2735 Chunk & operator=(Chunk const &);
2736 };
2737
2738 typedef MultiArray<N, SharedChunkHandle<N, T> > ChunkStorage;
2739 typedef typename ChunkStorage::difference_type shape_type;
2740 typedef T value_type;
2741 typedef value_type * pointer;
2742 typedef value_type & reference;
2743
2744 /** \brief Construct with given 'shape', 'chunk_shape' and 'options',
2745 using the allocator 'alloc' to manage the memory.
2746 */
2747 explicit ChunkedArrayLazy(shape_type const & shape,
2748 shape_type const & chunk_shape=shape_type(),
2749 ChunkedArrayOptions const & options = ChunkedArrayOptions(),
2750 Alloc const & alloc = Alloc())
2751 : ChunkedArray<N, T>(shape, chunk_shape, options.cacheMax(0))
2752 , alloc_(alloc)
2753 {}
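    /* Usage sketch (illustrative, not from the original source): memory is
       allocated only for chunks that are actually written.
       \code
       ChunkedArrayLazy<3, UInt8> labels(Shape3(2000, 2000, 2000), Shape3(64, 64, 64));
       labels.setItem(Shape3(5, 5, 5), 1);   // allocates just one 64^3 chunk
       \endcode
    */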
2754
2755 ~ChunkedArrayLazy()
2756 {
2757 typename ChunkStorage::iterator i = this->handle_array_.begin(),
2758 end = this->handle_array_.end();
2759 for(; i != end; ++i)
2760 {
2761 if(i->pointer_)
2762 delete static_cast<Chunk*>(i->pointer_);
2763 i->pointer_ = 0;
2764 }
2765 }
2766
2767 virtual pointer loadChunk(ChunkBase<N, T> ** p, shape_type const & index)
2768 {
2769 if(*p == 0)
2770 {
2771 *p = new Chunk(this->chunkShape(index));
2772 this->overhead_bytes_ += sizeof(Chunk);
2773 }
2774 return static_cast<Chunk *>(*p)->allocate();
2775 }
2776
2777 virtual bool unloadChunk(ChunkBase<N, T> * chunk, bool destroy)
2778 {
2779 if(destroy)
2780 static_cast<Chunk *>(chunk)->deallocate();
2781 return destroy;
2782 }
2783
2784 virtual std::string backend() const
2785 {
2786 return "ChunkedArrayLazy";
2787 }
2788
2789 virtual std::size_t dataBytes(ChunkBase<N,T> * c) const
2790 {
2791 return c->pointer_ == 0
2792 ? 0
2793 : static_cast<Chunk*>(c)->size_*sizeof(T);
2794 }
2795
2796 virtual std::size_t overheadBytesPerChunk() const
2797 {
2798 return sizeof(Chunk) + sizeof(SharedChunkHandle<N, T>);
2799 }
2800
2801 Alloc alloc_;
2802};
2803
2804/** \weakgroup ParallelProcessing
2805 \sa ChunkedArrayCompressed
2806*/
2807
2808/** Implement ChunkedArray as a collection of potentially compressed
2809 in-memory chunks.
2810
2811 This works like \ref ChunkedArrayLazy, but inactive chunks are compressed
2812 when sent asleep. This is especially appropriate for highly compressible
2813 data such as label images.
2814
2815 <b>\#include</b> <vigra/multi_array_chunked.hxx> <br/>
2816 Namespace: vigra
2817*/
2818template <unsigned int N, class T, class Alloc = std::allocator<T> >
2819class ChunkedArrayCompressed
2820: public ChunkedArray<N, T>
2821{
2822 public:
2823
2824 class Chunk
2825 : public ChunkBase<N, T>
2826 {
2827 public:
2828 typedef typename MultiArrayShape<N>::type shape_type;
2829 typedef T value_type;
2830 typedef value_type * pointer;
2831 typedef value_type & reference;
2832
2833 Chunk(shape_type const & shape)
2834 : ChunkBase<N, T>(detail::defaultStride(shape))
2835 , compressed_()
2836 , size_(prod(shape))
2837 {}
2838
2839 ~Chunk()
2840 {
2841 deallocate();
2842 }
2843
2844 pointer allocate()
2845 {
2846 if(this->pointer_ == 0)
2847 this->pointer_ = detail::alloc_initialize_n<T>(size_, T(), alloc_);
2848 return this->pointer_;
2849 }
2850
2851 void deallocate()
2852 {
2853 detail::destroy_dealloc_n(this->pointer_, size_, alloc_);
2854 this->pointer_ = 0;
2855 compressed_.clear();
2856 }
2857
2858 void compress(CompressionMethod method)
2859 {
2860 if(this->pointer_ != 0)
2861 {
2862 vigra_invariant(compressed_.size() == 0,
2863 "ChunkedArrayCompressed::Chunk::compress(): compressed and uncompressed pointer are both non-zero.");
2864
2865 ::vigra::compress((char const *)this->pointer_, size_*sizeof(T), compressed_, method);
2866
2867 // std::cerr << "compression ratio: " << double(compressed_.size())/(this->size()*sizeof(T)) << "\n";
2868 detail::destroy_dealloc_n(this->pointer_, size_, alloc_);
2869 this->pointer_ = 0;
2870 }
2871 }
2872
2873 pointer uncompress(CompressionMethod method)
2874 {
2875 if(this->pointer_ == 0)
2876 {
2877 if(compressed_.size())
2878 {
2879 this->pointer_ = alloc_.allocate((typename Alloc::size_type)size_);
2880
2881 ::vigra::uncompress(compressed_.data(), compressed_.size(),
2882 (char*)this->pointer_, size_*sizeof(T), method);
2883 compressed_.clear();
2884 }
2885 else
2886 {
2887 this->pointer_ = allocate();
2888 }
2889 }
2890 else
2891 {
2892 vigra_invariant(compressed_.size() == 0,
2893 "ChunkedArrayCompressed::Chunk::uncompress(): compressed and uncompressed pointer are both non-zero.");
2894 }
2895 return this->pointer_;
2896 }
2897
2898 ArrayVector<char> compressed_;
2899 MultiArrayIndex size_;
2900 Alloc alloc_;
2901
2902 private:
2903 Chunk & operator=(Chunk const &);
2904 };
2905
2906 typedef MultiArray<N, SharedChunkHandle<N, T> > ChunkStorage;
2907 typedef typename ChunkStorage::difference_type shape_type;
2908 typedef T value_type;
2909 typedef value_type * pointer;
2910 typedef value_type & reference;
2911
2912 /** \brief Construct with given 'shape', 'chunk_shape' and 'options'.
2913
2914 The most important option concerns the compression algorithm. Supported
2915 algorithms are:
2916 <ul>
2917 <li>LZ4: Very fast algorithm that achieves decent compression ratios.
2918 <li>ZLIB_FAST: Fast compression using 'zlib' (slower than LZ4, but higher compression).
2919 <li>ZLIB_BEST: Best compression using 'zlib', slow.
2920 <li>ZLIB_NONE: Use 'zlib' format without compression.
2921 <li>DEFAULT_COMPRESSION: Same as LZ4.
2922 </ul>
2923 */
2924 explicit ChunkedArrayCompressed(shape_type const & shape,
2925 shape_type const & chunk_shape=shape_type(),
2926 ChunkedArrayOptions const & options = ChunkedArrayOptions())
2927 : ChunkedArray<N, T>(shape, chunk_shape, options),
2928 compression_method_(options.compression_method)
2929 {
2930 if(compression_method_ == DEFAULT_COMPRESSION)
2931 compression_method_ = LZ4;
2932 }
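    /* Construction sketch (illustrative, not from the original source):
       \code
       ChunkedArrayOptions opt = ChunkedArrayOptions().compression(LZ4)
                                                      .cacheMax(100);
       ChunkedArrayCompressed<3, UInt32> a(Shape3(1000, 1000, 1000),
                                           Shape3(64, 64, 64), opt);
       \endcode
    */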
2933
2934 ~ChunkedArrayCompressed()
2935 {
2936 typename ChunkStorage::iterator i = this->handle_array_.begin(),
2937 end = this->handle_array_.end();
2938 for(; i != end; ++i)
2939 {
2940 if(i->pointer_)
2941 delete static_cast<Chunk*>(i->pointer_);
2942 i->pointer_ = 0;
2943 }
2944 }
2945
2946 virtual pointer loadChunk(ChunkBase<N, T> ** p, shape_type const & index)
2947 {
2948 if(*p == 0)
2949 {
2950 *p = new Chunk(this->chunkShape(index));
2951 this->overhead_bytes_ += sizeof(Chunk);
2952 }
2953 return static_cast<Chunk *>(*p)->uncompress(compression_method_);
2954 }
2955
2956 virtual bool unloadChunk(ChunkBase<N, T> * chunk, bool destroy)
2957 {
2958 if(destroy)
2959 static_cast<Chunk *>(chunk)->deallocate();
2960 else
2961 static_cast<Chunk *>(chunk)->compress(compression_method_);
2962 return destroy;
2963 }
2964
2965 virtual std::string backend() const
2966 {
2967 switch(compression_method_)
2968 {
2969 case ZLIB:
2970 return "ChunkedArrayCompressed<ZLIB>";
2971 case ZLIB_NONE:
2972 return "ChunkedArrayCompressed<ZLIB_NONE>";
2973 case ZLIB_FAST:
2974 return "ChunkedArrayCompressed<ZLIB_FAST>";
2975 case ZLIB_BEST:
2976 return "ChunkedArrayCompressed<ZLIB_BEST>";
2977 case LZ4:
2978 return "ChunkedArrayCompressed<LZ4>";
2979 default:
2980 return "unknown";
2981 }
2982 }
2983
2984 virtual std::size_t dataBytes(ChunkBase<N,T> * c) const
2985 {
2986 return c->pointer_ == 0
2987 ? static_cast<Chunk*>(c)->compressed_.size()
2988 : static_cast<Chunk*>(c)->size_*sizeof(T);
2989 }
2990
2991 virtual std::size_t overheadBytesPerChunk() const
2992 {
2993 return sizeof(Chunk) + sizeof(SharedChunkHandle<N, T>);
2994 }
2995
2996 CompressionMethod compression_method_;
2997};
2998
2999/** \weakgroup ParallelProcessing
3000 \sa ChunkedArrayTmpFile
3001*/
3002
3003/** Implement ChunkedArray as a collection of chunks that can be
3004 swapped out into a temporary file when asleep.
3005
3006 <b>\#include</b> <vigra/multi_array_chunked.hxx> <br/>
3007 Namespace: vigra
3008
3009 The present implementation uses a memory-mapped sparse file to store the chunks.
3010 A sparse file is created on Linux using the O_TRUNC flag (this seems to be
3011 the default file behavior on Linux anyway), and on Windows by
3012 calling DeviceIoControl(file_handle, FSCTL_SET_SPARSE,...) after file creation.
3013
3014 The file is automatically deleted upon closing. On Windows, this happens
3015 because the file was opened with FILE_FLAG_DELETE_ON_CLOSE in combination
3016 with the flag FILE_ATTRIBUTE_TEMPORARY, which tells the OS to avoid writing
3017 the file to disk if possible. (However, judging from the timings,
3018 something is still written, or cleanup takes considerable time.)
3019 On Linux, automated deletion is achieved via <tt>fileno(tmpfile())</tt>.
3020*/
3021template <unsigned int N, class T>
3022class ChunkedArrayTmpFile
3023: public ChunkedArray<N, T>
3024{
3025 /* REMARKS
3026
3027 Alternatives are:
3028 * Don't create a file explicitly, but use the swap file instead. This is
3029 achieved on Linux by mmap(..., MAP_PRIVATE | MAP_ANONYMOUS, -1, ...),
3030 on Windows by calling CreateFileMapping(INVALID_HANDLE_VALUE, ...).
3031 * On Linux, the memory must not be unmapped because this
3032 loses the data. In fact, anonymous mmap() is very similar to
3033 malloc(), and there is probably no good reason to use anonymous mmap().
3034 * On Windows, this is much faster, because the OS will never try to
3035 actually write something to disk (unless swapping is necessary).
3036 */
3037 public:
3038#ifdef _WIN32
3039 typedef HANDLE FileHandle;
3040#else
3041 typedef int FileHandle;
3042#endif
3043
3044 class Chunk
3045 : public ChunkBase<N, T>
3046 {
3047 public:
3048 typedef typename MultiArrayShape<N>::type shape_type;
3049 typedef T value_type;
3050 typedef value_type * pointer;
3051 typedef value_type & reference;
3052
3053 Chunk(shape_type const & shape,
3054 std::size_t offset, size_t alloc_size,
3055 FileHandle file)
3056 : ChunkBase<N, T>(detail::defaultStride(shape))
3057 , offset_(offset)
3058 , alloc_size_(alloc_size)
3059 , file_(file)
3060 {}
3061
3062 ~Chunk()
3063 {
3064 unmap();
3065 }
3066
3067 pointer map()
3068 {
3069 if(this->pointer_ == 0)
3070 {
3071 #ifdef _WIN32
3072 static const std::size_t bits = sizeof(DWORD)*8,
3073 mask = (std::size_t(1) << bits) - 1;
3074 this->pointer_ = (pointer)MapViewOfFile(file_, FILE_MAP_ALL_ACCESS,
3075 std::size_t(offset_) >> bits, offset_ & mask, alloc_size_);
3076 if(this->pointer_ == 0)
3077 winErrorToException("ChunkedArrayChunk::map(): ");
3078 #else
3079 this->pointer_ = (pointer)mmap(0, alloc_size_, PROT_READ | PROT_WRITE, MAP_SHARED,
3080 file_, offset_);
3081 if(this->pointer_ == 0)
3082 throw std::runtime_error("ChunkedArrayChunk::map(): mmap() failed.");
3083 #endif
3084 }
3085 return this->pointer_;
3086 }
3087
3088 void unmap()
3089 {
3090 if(this->pointer_ != 0)
3091 {
3092 #ifdef _WIN32
3093 ::UnmapViewOfFile(this->pointer_);
3094 #else
3095 munmap(this->pointer_, alloc_size_);
3096 #endif
3097 this->pointer_ = 0;
3098 }
3099 }
3100
3101 std::size_t offset_, alloc_size_;
3102 FileHandle file_;
3103
3104 private:
3105 Chunk & operator=(Chunk const &);
3106 };
3107
3108 typedef MultiArray<N, SharedChunkHandle<N, T> > ChunkStorage;
3109 typedef MultiArray<N, std::size_t> OffsetStorage;
3110 typedef typename ChunkStorage::difference_type shape_type;
3111 typedef T value_type;
3112 typedef value_type * pointer;
3113 typedef value_type & reference;
3114
3115 static std::size_t computeAllocSize(shape_type const & shape)
3116 {
3117 std::size_t size = prod(shape)*sizeof(T);
3118 std::size_t mask = mmap_alignment - 1;
3119 return (size + mask) & ~mask;
3120 }
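    /* Worked example (not part of the original source), assuming
       mmap_alignment == 4096: a 10x10x10 chunk of float needs 4000 bytes;
       (4000 + 4095) & ~4095 == 4096, i.e. one page. A 64^3 chunk of UInt8
       needs 262144 bytes, already a multiple of 4096, and is returned
       unchanged.
    */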
3121
3122 /** \brief Construct with given 'shape', 'chunk_shape' and 'options'.
3123
3124 If the optional 'path' is given, the file is created in this directory.
3125 Otherwise (default), the path specified by the $TMP or $TEMP environment
3126 variables (in that order) is used.
3127 */
3128 explicit ChunkedArrayTmpFile(shape_type const & shape,
3129 shape_type const & chunk_shape=shape_type(),
3130 ChunkedArrayOptions const & options = ChunkedArrayOptions(),
3131 std::string const & path = "")
3132 : ChunkedArray<N, T>(shape, chunk_shape, options)
3133 #ifndef VIGRA_NO_SPARSE_FILE
3134 , offset_array_(this->chunkArrayShape())
3135 #endif
3136 , file_size_()
3137 , file_capacity_()
3138 {
3139 ignore_argument(path);
3140 #ifdef VIGRA_NO_SPARSE_FILE
3141 file_capacity_ = 4*prod(this->chunk_shape_)*sizeof(T);
3142 #else
3143 // compute offset in file
3144 typename OffsetStorage::iterator i = offset_array_.begin(),
3145 end = offset_array_.end();
3146 std::size_t size = 0;
3147 for(; i != end; ++i)
3148 {
3149 *i = size;
3150 size += computeAllocSize(this->chunkShape(i.point()));
3151 }
3152 file_capacity_ = size;
3153 this->overhead_bytes_ += offset_array_.size()*sizeof(std::size_t);
3154 // std::cerr << " file size: " << size << "\n";
3155 #endif
3156
3157 #ifdef _WIN32
3158 // create a temp file
3159 file_ = ::CreateFile(winTempFileName(path).c_str(), GENERIC_READ | GENERIC_WRITE,
3160 0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE, NULL);
3161 if (file_ == INVALID_HANDLE_VALUE)
3162 winErrorToException("ChunkedArrayTmpFile(): ");
3163
3164 // make it a sparse file
3165 DWORD dwTemp;
3166 if(!::DeviceIoControl(file_, FSCTL_SET_SPARSE, NULL, 0, NULL, 0, &dwTemp, NULL))
3167 winErrorToException("ChunkedArrayTmpFile(): ");
3168
3169 // place the data in the swap file
3170 // file_ = INVALID_HANDLE_VALUE;
3171
3172 // resize and memory-map the file
3173 static const std::size_t bits = sizeof(LONG)*8, mask = (std::size_t(1) << bits) - 1;
3174 mappedFile_ = CreateFileMapping(file_, NULL, PAGE_READWRITE,
3175 file_capacity_ >> bits, file_capacity_ & mask, NULL);
3176 if(!mappedFile_)
3177 winErrorToException("ChunkedArrayTmpFile(): ");
3178 #else
3179 mappedFile_ = file_ = fileno(tmpfile());
3180 if(file_ == -1)
3181 throw std::runtime_error("ChunkedArrayTmpFile(): unable to open file.");
3182 lseek(file_, file_capacity_-1, SEEK_SET);
3183 if(write(file_, "0", 1) == -1)
3184 throw std::runtime_error("ChunkedArrayTmpFile(): unable to resize file.");
3185 #endif
3186 }
3187
3188 ~ChunkedArrayTmpFile()
3189 {
3190 typename ChunkStorage::iterator i = this->handle_array_.begin(),
3191 end = this->handle_array_.end();
3192 for(; i != end; ++i)
3193 {
3194 if(i->pointer_)
3195 delete static_cast<Chunk*>(i->pointer_);
3196 i->pointer_ = 0;
3197 }
3198 #ifdef _WIN32
3199 ::CloseHandle(mappedFile_);
3200 ::CloseHandle(file_);
3201 #else
3202 ::close(file_);
3203 #endif
3204 }
3205
3206 virtual pointer loadChunk(ChunkBase<N, T> ** p, shape_type const & index)
3207 {
3208 if(*p == 0)
3209 {
3210 shape_type shape = this->chunkShape(index);
3211 std::size_t chunk_size = computeAllocSize(shape);
3212 #ifdef VIGRA_NO_SPARSE_FILE
3213 std::size_t offset = file_size_;
3214 if(offset + chunk_size > file_capacity_)
3215 {
3216 file_capacity_ = max<std::size_t>(offset+chunk_size, file_capacity_ * 120 / 100); // extend file by 20%
3217 if(lseek(file_, file_capacity_-1, SEEK_SET) == -1)
3218 throw std::runtime_error("ChunkedArrayTmpFile(): unable to reset file size.");
3219 if(write(file_, "0", 1) == -1)
3220 throw std::runtime_error("ChunkedArrayTmpFile(): unable to resize file.");
3221 }
3222 file_size_ += chunk_size;
3223 #else
3224 std::size_t offset = offset_array_[index];
3225 #endif
3226 *p = new Chunk(shape, offset, chunk_size, mappedFile_);
3227 this->overhead_bytes_ += sizeof(Chunk);
3228 }
3229 return static_cast<Chunk*>(*p)->map();
3230 }
3231
3232 virtual bool unloadChunk(ChunkBase<N, T> * chunk, bool /* destroy*/)
3233 {
3234 static_cast<Chunk *>(chunk)->unmap();
3235 return false; // never destroys the data
3236 }
3237
3238 virtual std::string backend() const
3239 {
3240 return "ChunkedArrayTmpFile";
3241 }
3242
3243 virtual std::size_t dataBytes(ChunkBase<N,T> * c) const
3244 {
3245 return c->pointer_ == 0
3246 ? 0
3247 : static_cast<Chunk*>(c)->alloc_size_;
3248 }
3249
3250 virtual std::size_t overheadBytesPerChunk() const
3251 {
3252 #ifdef VIGRA_NO_SPARSE_FILE
3253 return sizeof(Chunk) + sizeof(SharedChunkHandle<N, T>);
3254 #else
3255 return sizeof(Chunk) + sizeof(SharedChunkHandle<N, T>) + sizeof(std::size_t);
3256 #endif
3257 }
3258
3259 #ifndef VIGRA_NO_SPARSE_FILE
3260 OffsetStorage offset_array_; // the array of chunks
3261 #endif
3262 FileHandle file_, mappedFile_; // the file back-end
3263 std::size_t file_size_, file_capacity_;
3264};
3265
3266template<unsigned int N, class U>
3267class ChunkIterator
3268: public MultiCoordinateIterator<N>
3269, private MultiArrayView<N, typename UnqualifiedType<U>::type>
3270{
3271 public:
3272 typedef typename UnqualifiedType<U>::type T;
3273 typedef MultiCoordinateIterator<N> base_type;
3274 typedef MultiArrayView<N, T> base_type2;
3275
3276 typedef typename base_type::shape_type shape_type;
3277 typedef typename base_type::difference_type difference_type;
3278 typedef ChunkIterator iterator;
3279 typedef std::random_access_iterator_tag iterator_category;
3280
3281 typedef MultiArrayView<N, T> value_type;
3282 typedef MultiArrayView<N, T> & reference;
3283 typedef MultiArrayView<N, T> const & const_reference;
3284 typedef MultiArrayView<N, T> * pointer;
3285 typedef MultiArrayView<N, T> const * const_pointer;
3286
3287 typedef typename IfBool<UnqualifiedType<U>::isConst,
3288 ChunkedArrayBase<N, T> const,
3289 ChunkedArrayBase<N, T> >::type array_type;
3290 typedef IteratorChunkHandle<N, T> Chunk;
3291
3292
3293 ChunkIterator()
3294 : base_type()
3295 , base_type2()
3296 {}
3297
3298 ChunkIterator(array_type * array,
3299 shape_type const & start, shape_type const & end,
3300 shape_type const & chunk_start, shape_type const & chunk_end,
3301 shape_type const & chunk_shape)
3302 : base_type(chunk_start, chunk_end)
3303 , array_(array)
3304 , chunk_(chunk_start * chunk_shape)
3305 , start_(start - chunk_.offset_)
3306 , stop_(end - chunk_.offset_)
3307 , chunk_shape_(chunk_shape)
3308 {
3309 getChunk();
3310 }
3311
3312 ChunkIterator(ChunkIterator const & rhs)
3313 : base_type(rhs)
3314 , base_type2(rhs)
3315 , array_(rhs.array_)
3316 , chunk_(rhs.chunk_)
3317 , start_(rhs.start_)
3318 , stop_(rhs.stop_)
3319 , chunk_shape_(rhs.chunk_shape_)
3320 {
3321 getChunk();
3322 }
3323
3324 ChunkIterator & operator=(ChunkIterator const & rhs)
3325 {
3326 if(this != &rhs)
3327 {
3328 base_type::operator=(rhs);
3329 array_ = rhs.array_;
3330 chunk_ = rhs.chunk_;
3331 start_ = rhs.start_;
3332 stop_ = rhs.stop_;
3333 chunk_shape_ = rhs.chunk_shape_;
3334 getChunk();
3335 }
3336 return *this;
3337 }
3338
3339 reference operator*()
3340 {
3341 return *this;
3342 }
3343
3344 const_reference operator*() const
3345 {
3346 return *this;
3347 }
3348
3349 pointer operator->()
3350 {
3351 return this;
3352 }
3353
3354 const_pointer operator->() const
3355 {
3356 return this;
3357 }
3358
3359 value_type operator[](MultiArrayIndex i) const
3360 {
3361 return *(ChunkIterator(*this) += i);
3362 }
3363
3364 value_type operator[](const shape_type &coordOffset) const
3365 {
3366 return *(ChunkIterator(*this) += coordOffset);
3367 }
3368
3369 void getChunk()
3370 {
3371 if(array_)
3372 {
3373 shape_type array_point = max(start_, this->point()*chunk_shape_),
3374 upper_bound(SkipInitialization);
3375 this->m_ptr = array_->chunkForIterator(array_point, this->m_stride, upper_bound, &chunk_);
3376 this->m_shape = min(upper_bound, stop_) - array_point;
3377 }
3378 }
3379
3380 shape_type chunkStart() const
3381 {
3382 return max(start_, this->point()*chunk_shape_) + chunk_.offset_;
3383 }
3384
3385 shape_type chunkStop() const
3386 {
3387 return chunkStart() + this->m_shape;
3388 }
3389
3390 ChunkIterator & operator++()
3391 {
3392 base_type::operator++();
3393 getChunk();
3394 return *this;
3395 }
3396
3397 ChunkIterator operator++(int)
3398 {
3399 ChunkIterator res(*this);
3400 ++*this;
3401 return res;
3402 }
3403
3404 ChunkIterator & operator+=(MultiArrayIndex i)
3405 {
3406 base_type::operator+=(i);
3407 getChunk();
3408 return *this;
3409 }
3410
3411 ChunkIterator & operator+=(const shape_type &coordOffset)
3412 {
3413 base_type::operator+=(coordOffset);
3414 getChunk();
3415 return *this;
3416 }
3417
3418 ChunkIterator & operator--()
3419 {
3420 base_type::operator--();
3421 getChunk();
3422 return *this;
3423 }
3424
3425 ChunkIterator operator--(int)
3426 {
3427 ChunkIterator res(*this);
3428 --*this;
3429 return res;
3430 }
3431
3432 ChunkIterator & operator-=(MultiArrayIndex i)
3433 {
3434 return operator+=(-i);
3435 }
3436
3437 ChunkIterator & operator-=(const shape_type &coordOffset)
3438 {
3439 return operator+=(-coordOffset);
3440 }
3441
3442 ChunkIterator getEndIterator() const
3443 {
3444 ChunkIterator res(*this);
3445 static_cast<base_type &>(res) = base_type::getEndIterator();
3446 res.getChunk();
3447 return res;
3448 }
3449
3450 ChunkIterator operator+(MultiArrayIndex d) const
3451 {
3452 return ChunkIterator(*this) += d;
3453 }
3454
3455 ChunkIterator operator-(MultiArrayIndex d) const
3456 {
3457 return ChunkIterator(*this) -= d;
3458 }
3459
3460 ChunkIterator operator+(const shape_type &coordOffset) const
3461 {
3462 return ChunkIterator(*this) += coordOffset;
3463 }
3464
3465 ChunkIterator operator-(const shape_type &coordOffset) const
3466 {
3467 return ChunkIterator(*this) -= coordOffset;
3468 }
3469
3470 MultiArrayIndex operator-(const ChunkIterator & other) const
3471 {
3472 return base_type::operator-(other);
3473 }
3474
3475#ifndef DOXYGEN // doxygen doesn't understand this
3476 using base_type::operator==;
3477 using base_type::operator!=;
3478#endif
3479 using base_type::shape;
3480
3481 array_type * array_;
3482 Chunk chunk_;
3483 shape_type start_, stop_, chunk_shape_, array_point_;
3484};
3485
3486//@}
3487
3488} // namespace vigra
3489
3490#undef VIGRA_ASSERT_INSIDE
3491
3492#endif /* VIGRA_MULTI_ARRAY_CHUNKED_HXX */
MultiArrayView< N-1, T, ChunkedArrayTag > bindAt(MultiArrayIndex dim, MultiArrayIndex index) const
Create a lower dimensional view to the chunked array.
Definition multi_array_chunked.hxx:2265
std::size_t dataBytesPerChunk() const
Number of data bytes in an uncompressed chunk.
Definition multi_array_chunked.hxx:1697
chunk_const_iterator chunk_cend(shape_type const &start, shape_type const &stop) const
Create the end iterator for read-only iteration over all chunks intersected by the given ROI.
Definition multi_array_chunked.hxx:2471
MultiArrayView< N-1, T, ChunkedArrayTag > bind(difference_type_1 index) const
Create a lower dimensional view to the chunked array.
Definition multi_array_chunked.hxx:2281
bool operator==(MultiArrayView< N, U, C1 > const &rhs) const
Check if two arrays are elementwise equal.
Definition multi_array_chunked.hxx:1784
view_type subarray(shape_type const &start, shape_type const &stop)
Create a view to the specified ROI.
Definition multi_array_chunked.hxx:2180
std::string backend() const
Return the class that implements this ChunkedArray.
void checkoutSubarray(shape_type const &start, MultiArrayView< N, U, Stride > &subarray) const
Copy an ROI of the chunked array into an ordinary MultiArrayView.
Definition multi_array_chunked.hxx:2092
iterator end()
Create the end iterator for scan-order iteration over the entire chunked array.
Definition multi_array_chunked.hxx:2389
const_iterator end() const
Create the end iterator for read-only scan-order iteration over the entire chunked array.
Definition multi_array_chunked.hxx:2421
iterator begin()
Create a scan-order iterator for the entire chunked array.
Definition multi_array_chunked.hxx:2381
std::size_t overheadBytes() const
Bytes of main memory needed to manage the chunked storage.
Definition multi_array_chunked.hxx:1681
MultiArrayView< N-1, T, ChunkedArrayTag > bindInner(difference_type_1 index) const
Create a lower dimensional view to the chunked array.
Definition multi_array_chunked.hxx:2325
MultiArrayView< N-M, T, ChunkedArrayTag > bindInner(const TinyVector< Index, M > &d) const
Create a lower dimensional view to the chunked array.
Definition multi_array_chunked.hxx:2337
chunk_iterator chunk_begin(shape_type const &start, shape_type const &stop)
Create an iterator over all chunks intersected by the given ROI.
Definition multi_array_chunked.hxx:2428
void commitSubarray(shape_type const &start, MultiArrayView< N, U, Stride > const &subarray)
Copy an ordinary MultiArrayView into an ROI of the chunked array.
Definition multi_array_chunked.hxx:2114
TinyVector< MultiArrayIndex, N > type
Definition multi_shape.hxx:272
Base class for, and view to, vigra::MultiArray.
Definition multi_fwd.hxx:127
Main MultiArray class containing the memory management.
Definition multi_fwd.hxx:131
view_type::const_reference const_reference
Definition multi_array.hxx:2516
view_type::reference reference
Definition multi_array.hxx:2512
view_type::const_pointer const_pointer
Definition multi_array.hxx:2508
view_type::difference_type_1 difference_type_1
Definition multi_array.hxx:2528
view_type::pointer pointer
Definition multi_array.hxx:2504
view_type::value_type value_type
Definition multi_array.hxx:2500
Iterate over a virtual array where each element contains its coordinate.
Definition multi_iterator.hxx:89
Sequential iterator for MultiArrayView.
Definition multi_iterator.hxx:273
Class for fixed size vectors.
Definition tinyvector.hxx:1008
void uncompress(char const *source, std::size_t srcSize, char *dest, std::size_t destSize, CompressionMethod method)
detail::SelectIntegerType< 32, detail::UnsignedIntTypes >::type UInt32
32-bit unsigned int
Definition sized_int.hxx:183
bool allLessEqual(TinyVectorBase< V1, SIZE, D1, D2 > const &l, TinyVectorBase< V2, SIZE, D3, D4 > const &r)
pointwise less-equal
Definition tinyvector.hxx:1399
FFTWComplex< R > & operator+=(FFTWComplex< R > &a, const FFTWComplex< R > &b)
add-assignment
Definition fftw3.hxx:859
UInt32 ceilPower2(UInt32 x)
Round up to the nearest power of 2.
Definition mathutil.hxx:294
Int32 log2i(UInt32 x)
Compute the base-2 logarithm of an integer.
Definition mathutil.hxx:361
FFTWComplex< R > & operator-=(FFTWComplex< R > &a, const FFTWComplex< R > &b)
subtract-assignment
Definition fftw3.hxx:867
NumericTraits< V >::Promote prod(TinyVectorBase< V, SIZE, D1, D2 > const &l)
product of the vector's elements
Definition tinyvector.hxx:2097
bool allLess(TinyVectorBase< V1, SIZE, D1, D2 > const &l, TinyVectorBase< V2, SIZE, D3, D4 > const &r)
pointwise less-than
Definition tinyvector.hxx:1375
std::ptrdiff_t MultiArrayIndex
Definition multi_fwd.hxx:60
void compress(char const *source, std::size_t size, ArrayVector< char > &dest, CompressionMethod method)
Definition metaprogramming.hxx:130

© Ullrich Köthe (ullrich.koethe@iwr.uni-heidelberg.de)
Heidelberg Collaboratory for Image Processing, University of Heidelberg, Germany

html generated using doxygen and Python
vigra 1.12.2