julia 1.10.10
…slower than multiplication. While some arrays — like Array itself — are implemented using a linear chunk of memory and directly use a linear index in their implementations, other arrays — like Diagonal… …length(a) ÷ Threads.nthreads()) tasks = map(chunks) do chunk Threads.@spawn sum_single(chunk) end chunk_sums = fetch.(tasks) return sum_single(chunk_sums) end sum_multi_good (generic function with 1… …Consequently, a good multiprocessing environment should allow control over the "ownership" of a chunk of memory by a particular CPU. Julia provides a multiprocessing environment based on message passing…
0 credits | 1692 pages | 6.34 MB | 3 months ago

Julia 1.10.9
…slower than multiplication. While some arrays — like Array itself — are implemented using a linear chunk of memory and directly use a linear index in their implementations, other arrays — like Diagonal… …length(a) ÷ Threads.nthreads()) tasks = map(chunks) do chunk Threads.@spawn sum_single(chunk) end chunk_sums = fetch.(tasks) return sum_single(chunk_sums) end sum_multi_good (generic function with 1… …Consequently, a good multiprocessing environment should allow control over the "ownership" of a chunk of memory by a particular CPU. Julia provides a multiprocessing environment based on message passing…
0 credits | 1692 pages | 6.34 MB | 3 months ago

Julia 1.11.4
…slower than multiplication. While some arrays — like Array itself — are implemented using a linear chunk of memory and directly use a linear index in their implementations, other arrays — like Diagonal… …chunks for parallel work. We then use Threads.@spawn to create tasks that individually sum each chunk. Finally, we sum the results from each task using sum_single again: julia> function sum_multi_good(a)… …length(a) ÷ Threads.nthreads()) tasks = map(chunks) do chunk Threads.@spawn sum_single(chunk) end chunk_sums = fetch.(tasks) return sum_single(chunk_sums) end sum_multi_good (generic function with 1…
0 credits | 2007 pages | 6.73 MB | 3 months ago

Julia 1.11.5 Documentation
…slower than multiplication. While some arrays — like Array itself — are implemented using a linear chunk of memory and directly use a linear index in their implementations, other arrays — like Diagonal… …chunks for parallel work. We then use Threads.@spawn to create tasks that individually sum each chunk. Finally, we sum the results from each task using sum_single again: julia> function sum_multi_good(a)… …length(a) ÷ Threads.nthreads()) tasks = map(chunks) do chunk Threads.@spawn sum_single(chunk) end chunk_sums = fetch.(tasks) return sum_single(chunk_sums) end sum_multi_good (generic function with 1…
0 credits | 2007 pages | 6.73 MB | 3 months ago

Julia 1.11.6 Release Notes
…slower than multiplication. While some arrays — like Array itself — are implemented using a linear chunk of memory and directly use a linear index in their implementations, other arrays — like Diagonal… …chunks for parallel work. We then use Threads.@spawn to create tasks that individually sum each chunk. Finally, we sum the results from each task using sum_single again: julia> function sum_multi_good(a)… …length(a) ÷ Threads.nthreads()) tasks = map(chunks) do chunk Threads.@spawn sum_single(chunk) end chunk_sums = fetch.(tasks) return sum_single(chunk_sums) end sum_multi_good (generic function with 1…
0 credits | 2007 pages | 6.73 MB | 3 months ago

Julia v1.2.0 Documentation
…slower than multiplication. While some arrays — like Array itself — are implemented using a linear chunk of memory and directly use a linear index in their implementations, other arrays — like Diagonal —… …cache. Consequently, a good multiprocessing environment should allow control over the "ownership" of a chunk of memory by a particular CPU. Julia provides a multiprocessing environment based on message passing… …quite different. In a DArray, each process has local access to just a chunk of the data, and no two processes share the same chunk; in contrast, in a SharedArray each "participating" process has access to…
0 credits | 1250 pages | 4.29 MB | 1 year ago

Julia v1.1.1 Documentation
…slower than multiplication. While some arrays — like Array itself — are implemented using a linear chunk of memory and directly use a linear index in their implementations, other arrays — like Diagonal —… …cache. Consequently, a good multiprocessing environment should allow control over the "ownership" of a chunk of memory by a particular CPU. Julia provides a multiprocessing environment based on message passing… …quite different. In a DArray, each process has local access to just a chunk of the data, and no two processes share the same chunk; in contrast, in a SharedArray each "participating" process has access to…
0 credits | 1216 pages | 4.21 MB | 1 year ago

Julia 1.2.0 DEV Documentation
…slower than multiplication. While some arrays — like Array itself — are implemented using a linear chunk of memory and directly use a linear index in their implementations, other arrays — like Diagonal —… …cache. Consequently, a good multiprocessing environment should allow control over the "ownership" of a chunk of memory by a particular CPU. Julia provides a multiprocessing environment based on message passing… …quite different. In a DArray, each process has local access to just a chunk of the data, and no two processes share the same chunk; in contrast, in a SharedArray each "participating" process has access to…
0 credits | 1252 pages | 4.28 MB | 1 year ago

Julia v1.9.4 Documentation
…slower than multiplication. While some arrays — like Array itself — are implemented using a linear chunk of memory and directly use a linear index in their implementations, other arrays — like Diagonal… …length(a) ÷ Threads.nthreads()) tasks = map(chunks) do chunk Threads.@spawn sum_single(chunk) end chunk_sums = fetch.(tasks) return sum_single(chunk_sums) end sum_multi_good… …Consequently, a good multiprocessing environment should allow control over the "ownership" of a chunk of memory by a particular CPU. Julia provides a multiprocessing environment based on message passing…
0 credits | 1644 pages | 5.27 MB | 1 year ago

Julia 1.9.3 Documentation
…slower than multiplication. While some arrays — like Array itself — are implemented using a linear chunk of memory and directly use a linear index in their implementations, other arrays — like Diagonal… …length(a) ÷ Threads.nthreads()) tasks = map(chunks) do chunk Threads.@spawn sum_single(chunk) end chunk_sums = fetch.(tasks) return sum_single(chunk_sums) end sum_multi_good… …Consequently, a good multiprocessing environment should allow control over the "ownership" of a chunk of memory by a particular CPU. Julia provides a multiprocessing environment based on message passing…
0 credits | 1644 pages | 5.27 MB | 1 year ago
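The sum_multi_good snippet that recurs in the results above is from the Julia manual's multi-threading chapter, but every snippet truncates its first line. Reassembled into runnable form (restoring the truncated line as Iterators.partition, and including the sequential helper sum_single that the manual defines alongside it), it looks roughly like this:

```julia
# Sequential sum over one chunk: the manual's single-threaded baseline.
function sum_single(a)
    s = zero(eltype(a))
    for v in a
        s += v
    end
    return s
end

# Split `a` into one chunk per thread, spawn a task that sums each chunk,
# then sum the per-chunk results. This avoids the data race of having many
# threads accumulate into a single shared variable.
function sum_multi_good(a)
    chunks = Iterators.partition(a, length(a) ÷ Threads.nthreads())
    tasks = map(chunks) do chunk
        Threads.@spawn sum_single(chunk)
    end
    chunk_sums = fetch.(tasks)      # wait for each task and collect its result
    return sum_single(chunk_sums)
end

sum_multi_good(1:1_000_000)         # 500000500000, regardless of thread count
```

The result is deterministic because each task owns its chunk exclusively and the partial sums are only combined after every task has finished.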
共 87 条
- 1
- 2
- 3
- 4
- 5
- 6
- 9