Custom Native Container [Part 3]: Parallel Job Using Min Max

Introduction

In the previous parts of this series we looked at how to create a basic custom native container that can be used with jobs. This article improves our NativeIntArray container by adding support for parallel jobs. It does this with a pattern where the work is split into ranges and each batch job is only allowed to operate within its own range, which in practice limits array access to the index passed into Execute(int index). More information about how these jobs are scheduled behind the scenes can be found in the Unity documentation here.

The result of the previous article can be found here.
The final result of this article can be found here.
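
To make the idea of ranges more concrete, the sketch below illustrates roughly what happens when a parallel job is scheduled over our container. It is a conceptual sketch only, not Unity's actual scheduler code, and the length of 1024 and batch size of 64 are just example values.

// Conceptual sketch only; not Unity's actual scheduler code.
// Schedule(length: 1024, innerloopBatchCount: 64) splits the work into
// batches of 64 indices. Each worker thread gets its own copy of the job
// (and of the container), and the job system patches m_MinIndex/m_MaxIndex
// on that copy to the batch it is processing.
for (int batchStart = 0; batchStart < 1024; batchStart += 64)
{
    int batchEnd = System.Math.Min(batchStart + 64, 1024) - 1;
    // On the worker handling this batch:
    //   container.m_MinIndex = batchStart;
    //   container.m_MaxIndex = batchEnd;
    // Execute(index) is then called for every index in [batchStart, batchEnd],
    // and any access outside that range fails the safety check.
}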

1) Enable Support

We add the [NativeContainerSupportsMinMaxWriteRestriction] attribute to enable support for this kind of parallel job. We also have to add m_MinIndex and m_MaxIndex variables and initialize them with the entire range of our array; these variables are required for the safety checks. Watch out: the naming and the order of these variables are very important here!
We will also use this opportunity for a quick reminder of what our container roughly looked like: a simple array of integers.

using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Threading;
using Unity.Burst;
using Unity.Collections;
using Unity.Collections.LowLevel.Unsafe;
using Unity.Jobs;

// This enables support for parallel job execution where each worker thread 
// is only allowed to operate on a range of indices between min and max.
[NativeContainerSupportsMinMaxWriteRestriction]
[NativeContainerSupportsDeallocateOnJobCompletion]
[NativeContainer]
[StructLayout(LayoutKind.Sequential)] 
public unsafe struct NativeIntArray : IDisposable
{
    [NativeDisableUnsafePtrRestriction] internal void* m_Buffer;
    internal int m_Length;

#if ENABLE_UNITY_COLLECTIONS_CHECKS
	// NativeContainerSupportsMinMaxWriteRestriction expects accesses to be checked against the range the container is allowed to operate on.
	// The range is passed to the container when a parallel job schedules its batch jobs.
	internal int m_MinIndex;
    internal int m_MaxIndex;

    internal AtomicSafetyHandle m_Safety;
    [NativeSetClassTypeToNullOnSchedule] internal DisposeSentinel m_DisposeSentinel;
#endif

    internal Allocator m_AllocatorLabel;

    public NativeIntArray(int length, Allocator allocator, NativeArrayOptions options = NativeArrayOptions.ClearMemory){ /* More Code */ }

    static void Allocate(int length, Allocator allocator, out NativeIntArray array)
    {
        long size = UnsafeUtility.SizeOf<int>() * (long)length;

		/* More Code */

        array = default(NativeIntArray);
        array.m_Buffer = UnsafeUtility.Malloc(size, UnsafeUtility.AlignOf<int>(), allocator);
        array.m_Length = length;
        array.m_AllocatorLabel = allocator;

#if ENABLE_UNITY_COLLECTIONS_CHECKS
        // By default the job can operate over the entire range.
        array.m_MinIndex = 0;
        array.m_MaxIndex = length - 1;

        DisposeSentinel.Create(out array.m_Safety, out array.m_DisposeSentinel, 1, allocator);
#endif
    }

	/*
	 * ... Next Code ...
	 */

2) Range Checking

The only other change we need to make is to check that the index is within range when accessing an element of the array. All other functions in this container that access the array do so through the [] operator, so it is enough to add our range checks to this operator only (see the sketch after the code below).

	/*
	 * ... Previous Code ...
	 */

	// Calls to this function are stripped when ENABLE_UNITY_COLLECTIONS_CHECKS is not defined.
	[Conditional("ENABLE_UNITY_COLLECTIONS_CHECKS")]
	private void CheckRangeAccess(int index)
    {
#if ENABLE_UNITY_COLLECTIONS_CHECKS
		// Check if we're within the range of indices that this parallel batch job operates on.
		if (index < m_MinIndex || index > m_MaxIndex)
        {
            if (index < Length && (m_MinIndex != 0 || m_MaxIndex != Length - 1))
                throw new IndexOutOfRangeException(string.Format(
                    "Index {0} is out of restricted IJobParallelFor range [{1}...{2}] in ReadWriteBuffer.\n" +
                    "ReadWriteBuffers are restricted to only read & write the element at the job index. " +
                    "You can use double buffering strategies to avoid race conditions due to " +
                    "reading & writing in parallel to the same elements from a job.",
                    index, m_MinIndex, m_MaxIndex));

			// Either this is not a restricted parallel range, or the index is out of the container's bounds.
			throw new IndexOutOfRangeException(string.Format("Index {0} is out of range of '{1}' Length.", index, Length));
        }
#endif
    }

    public int this[int index]
    {
        get
        {
#if ENABLE_UNITY_COLLECTIONS_CHECKS
            AtomicSafetyHandle.CheckReadAndThrow(m_Safety);
#endif
            CheckRangeAccess(index);
            return UnsafeUtility.ReadArrayElement<int>(m_Buffer, index);
        }

        [WriteAccessRequired]
        set
        {
#if ENABLE_UNITY_COLLECTIONS_CHECKS
            AtomicSafetyHandle.CheckWriteAndThrow(m_Safety);
#endif
            CheckRangeAccess(index);
            UnsafeUtility.WriteArrayElement(m_Buffer, index, value);
        }
    }

	/*
	* ... More Code ...
	*/
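
As an illustration of the point above, any convenience method that reads or writes through the indexer automatically inherits both the range check and the atomic safety checks. The Increment helper below is only a sketch of such a method (your container's helpers may differ), and it needs no extra safety code of its own:

	[WriteAccessRequired]
	public int Increment(int index)
	{
		// Both accesses go through this[index], which already performs
		// CheckRangeAccess() and the AtomicSafetyHandle checks.
		int value = this[index] + 1;
		this[index] = value;
		return value;
	}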

Usage

And that's all! We have now added support for parallel jobs to our NativeIntArray. An example is shown below.

using Unity.Burst;
using Unity.Collections;
using Unity.Entities;
using Unity.Jobs;
using Unity.Mathematics;

public class NativeIntArraySystem : SystemBase
{
	[BurstCompile]
    struct ParallelWriteRangeJob : IJobParallelFor
    {
        public Random random;
		// See the previous part on how to add support for [DeallocateOnJobCompletion].
        [DeallocateOnJobCompletion] public NativeIntArray array;

        public void Execute(int index)
        {
            array[index] = random.NextInt();
        }
    }
	
	protected override void OnUpdate()
    {
        NativeIntArray myArray = new NativeIntArray(1024, Allocator.TempJob);

		// Fill myArray with random values.
        JobHandle jobHandle = new ParallelWriteRangeJob()
        {
            random = new Random((uint)UnityEngine.Random.Range(0, int.MaxValue)),
            array = myArray
        }.Schedule(myArray.Length, 64, Dependency); // Schedule with a batch size of 64.

		Dependency = jobHandle;
	}
}
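
To see the min/max restriction at work, the hypothetical job below deliberately writes to a neighbouring element. With ENABLE_UNITY_COLLECTIONS_CHECKS enabled, CheckRangeAccess() throws an IndexOutOfRangeException as soon as an access crosses the boundary of the batch a worker is processing; without the safety checks this race would go unnoticed:

[BurstCompile]
struct BrokenNeighbourWriteJob : IJobParallelFor
{
    public NativeIntArray array;

    public void Execute(int index)
    {
        // Writing to index + 1 can leave the [m_MinIndex, m_MaxIndex] range of
        // this batch, which the safety system reports as an exception.
        if (index + 1 < array.Length)
            array[index + 1] = index;
    }
}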

Conclusion

This article showed how to add support for parallel jobs using a pattern where the work is split into ranges. A limitation of this pattern is that it does not allow multiple jobs to write to the same index. In the next part we will look at how to make this possible by adding support for a ParallelWriter.

Custom Native Container [Part 1]: The Basics
Custom Native Container [Part 2]: Deallocate On Job Completion
Custom Native Container [Part 3]: Parallel Job Using Min Max
Custom Native Container [Part 4]: Parallel Job Using ParallelWriter
Custom Native Container [Part 5]: ParallelFor Using ParallelWriter With Thread Index