Task
For this assignment, you may work with a single partner or may work alone.
- Download smooth.tar on a Linux system. [Last updated: 2018-04-11 22:40]
- Run tar xvf smooth.tar to extract the tarball, creating the directory smooth.
- Edit smooth.c to make the team_t structure include your name(s) and computing ID(s). You also need to supply a team name. We will publicly post your last performance result on a scoreboard, which will use whatever team name you supply. (There will also be a more detailed result viewer only visible to you.)
- In smooth.c, create new implementations of smooth that perform the same calculation as naive_smooth, but faster. Modify register_smooth_functions to include each of these functions. We provide very detailed hints about how to do this below. To achieve the desired speedup, you will probably need to use vector intrinsics, and may wish to refer to this reference. You may not attempt to interfere with the code that will time your new implementations of smooth. (See other rules below.) Your functions only need to work for multiple-of-32 dimension values.
- Test your implementation to make sure it is correct and to see what average speedup it gets. To get full credit, your speedup should be at least 3.8 on our testing machine. (See grading below for more details.)
- You can run make to build ./benchmark, which tests your smooth implementations on your own machine or on a lab machine. Usually, if version A of smooth is significantly faster on your machine, it will also be faster on ours, but how much faster may vary significantly. You should always do this before submitting something for our server to run, to make sure your new version of smooth performs the correct calculation.
- If you submit smooth.c to archimedes, we will time it on our machine and post the results here. It may take 30-60 minutes for us to run your submission, and perhaps more if we receive a lot of submissions in a short amount of time.
Collaboration Policy
The following kinds of conversations are permitted with people other than your partner or course staff:
- drawings and descriptions of how you want to handle memory accesses across an image
- code snippets explaining optimizations, but only for code unlike the smooth and rotate problems we are discussing
- discussion of the code we provide you: how RIDX works, what the naive implementations are doing, etc.
Other rules
Violations of the following rules will be seen as cheating, subject to penalties that extend beyond the scope of this assignment:
- You must not modify or interfere with the timing code
- You must not attempt to inspect or modify the network, disk, or other grading platform resources
Additionally, the following rules may result in grade penalties within this assignment if violated:
- You must write valid C code. (But you may use GCC-specific extensions.)
- You should not turn in code that contains print statements.
- Your code must work (i.e., provide the same functionality as the provided naive implementations) for any image of any multiple-of-32 dimension (32, 64, 96, etc.).
- Your code must not use the AVX instructions. (Our testing machine supports them, but they do not work on the lab machines, and we want to ensure that everyone has fair access to a testing environment.)
Hints
In order to achieve the required speedup, you will almost certainly need to use SSE intrinsics to implement smooth. With the compiler settings we are using, I have gotten speedups of around 2.6x without the SSE intrinsics, but our target is 3.8x. We have provided a guide to SSE intrinsics. For the rotate part of the assignment, we will not provide much guidance about what optimization techniques you must use. But since most of the work in the smooth part of the assignment will be figuring out how to use the SSE intrinsics effectively, we will provide fairly detailed guidance on an effective strategy. This is not the only strategy you can use, and you can definitely get more speed by applying additional optimizations.
(With more aggressive compiler settings than we have chosen, our compiler will sometimes generate SSE instructions and sometimes not, depending on exactly how the code is written. In this assignment, we are trying to give practice that would be useful in the relatively common scenarios where the compiler cannot figure out how to vectorize the function for some reason, and give you an idea what the compiler is doing when it vectorizes the loop.)
Strategy
There are four steps we recommend taking when implementing smooth:
- First, separate out the special cases of the edges and corners from the loop over each pixel.
- Then, focus on the loop over the pixels in the middle of the image. Fully unroll the nested loop over the 3x3 square of pixels.
- Then, convert that fully unrolled loop to use vector instructions to add whole pixels (arrays of 4 values) at a time rather than adding one value at a time. Make sure the vector instructions use multiple accumulators (the processor can start multiple vector-adds at a time).
- Then, do one or both of the following:
- A. Convert your loop over each pixel to act on pairs of pixels instead of single pixels; or
- B. Perform the division by 9 and the conversion from 16-bit intermediate values back to 8-bit pixel values using the SSE intrinsics
On our test machine with our reference implementations:
- performing step 1 gets about a 1.5x speedup;
- performing steps 1-2 gets about a 2.6x speedup;
- performing steps 1-3 gets about a 3x speedup;
- performing steps 1-3, and option A from step 4 gets about a 4x speedup;
- performing steps 1-3, and option B from step 4 gets about a 5x speedup;
- performing steps 1-3, and both options A and B from step 4, gets about an 8x speedup.
There may be other ways, perhaps more efficient, of vectorizing and optimizing the computation. Your grade will be based on the speedup you get, not whether you follow our recommendations.
Separating out the special cases
The main loop of smooth has three different cases:
- The destination pixel is in the middle of the image, so we load 9 pixels, add them together, then divide by 9.
- The destination pixel is on a corner of the image, so we load 4 pixels (from varying sides of the destination pixel), add them together, then divide by 4.
- The destination pixel is on the edge of the image, so we load 6 pixels (from varying sides of the destination pixel), add them together, then divide by 6.
The provided code figures out which of these 3 cases we are in for every pixel. Since the calculations we perform for each pixel are quite simple, this check is costly. Instead, you can write a loop over the middle of the image, which always loads 9 pixels, and separate loops for the corners and edges.
When you’ve created a separate loop for the middle of the image, remove redundant computations from it:
- the inner loop doesn’t need to check whether i + 1, i - 1, etc. is beyond the bounds of the image
- the inner loop can hard-code the number 9 to divide by rather than counting how many pixels are added
When you do so, it would make sense to inline the code from avg or to create a separate version of the avg function. We recommend inlining the code so that the compiler is more likely to perform optimizations that avoid recomputing values (like i*dim) between iterations of the loop over the middle of the image.
A note on division
Removing the special cases from the loop also solves an additional problem: otherwise, we compute the number to divide by separately for every pixel, which means the compiler usually needs to generate a divide instruction to perform the division. When the compiler knows that it is performing a division by a constant, it can do something much faster.
For example, it turns out that 1/9 is approximately equal to 0x71c7/0x40000. This means that the compiler can compile x / 9 into
((x + 1) * 0x71c7) >> 18
as long as it knows x is small enough (and unsigned). Even though this performs many more instructions than using a division instruction, the division instruction would take many 10s of cycles, and so is considerably slower than performing the calculation this way. But this optimization is only possible if the compiler knows that it is dividing by a constant, so it can precompute (in this case) 0x71c7 and 18.
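If you want to convince yourself of this identity, here is a quick standalone check (not part of the assignment) over the range of sums that actually occur in smooth:

#include <assert.h>

int main(void)
{
    /* For every sum x that can occur in smooth (at most 9 * 255 = 2295),
       the multiply-and-shift matches integer division by 9 exactly. */
    for (unsigned x = 0; x <= 2295; x++)
        assert(((x + 1) * 0x71c7u) >> 18 == x / 9);
    return 0;
}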
Unroll the inner-most loops
At this point, you probably have a set of loops over a 3x3 square of pixels. Unroll these loops completely (so there are nine copies of the loop body).
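As a purely illustrative sketch (assuming i, j, dim, src, and dst as in the middle-of-image loop above), the unrolled body might compute the red component like this, with the other three components handled the same way:

unsigned short red =
      src[RIDX(i - 1, j - 1, dim)].red + src[RIDX(i - 1, j, dim)].red + src[RIDX(i - 1, j + 1, dim)].red
    + src[RIDX(i,     j - 1, dim)].red + src[RIDX(i,     j, dim)].red + src[RIDX(i,     j + 1, dim)].red
    + src[RIDX(i + 1, j - 1, dim)].red + src[RIDX(i + 1, j, dim)].red + src[RIDX(i + 1, j + 1, dim)].red;
dst[RIDX(i, j, dim)].red = red / 9;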
Add pixels with vector instructions
For this part and anything else with vector instructions, refer to our general SIMD reference and our SIMD lab.
Loading pixels into vector registers
Use vector instructions to load each of the 9 pixels into its own __m128i. Since pixels are 32 bits, using _mm_loadu_si128 will end up loading 4 pixels into an __m128i. This is okay; we just won't use all of the 128 bits loaded each time. (Later on, you can work on eliminating this waste, if necessary.)
So, for example:
// load 128 bits (4 pixels)
__m128i the_pixel = _mm_loadu_si128((__m128i*) &src[RIDX(i, j, dim)]);
will make the_pixel
contain, when interpreted as 8-bit integers:
{
src[RIDX(i, j, dim)].red,
src[RIDX(i, j, dim)].green,
src[RIDX(i, j, dim)].blue,
src[RIDX(i, j, dim)].alpha,
src[RIDX(i, j+1, dim)].red,
src[RIDX(i, j+1, dim)].green,
src[RIDX(i, j+1, dim)].blue,
src[RIDX(i, j+1, dim)].alpha,
src[RIDX(i, j+2, dim)].red,
src[RIDX(i, j+2, dim)].green,
src[RIDX(i, j+2, dim)].blue,
src[RIDX(i, j+2, dim)].alpha,
src[RIDX(i, j+3, dim)].red,
src[RIDX(i, j+3, dim)].green,
src[RIDX(i, j+3, dim)].blue,
src[RIDX(i, j+3, dim)].alpha
}
For now, we will simply ignore the last 12 values we loaded. (Also, you do not need to worry about exceeding the bounds of the source array; going past the end of the array by 12 bytes should never segfault.)
To perform our calculations accurately, we need to convert the 8-bit unsigned chars into 16-bit unsigned shorts in order to avoid overflow. You can do this with the _mm_cvtepu8_epi16 intrinsic function, which will take the first eight 8-bit values from a vector and construct a vector of eight 16-bit values. For now, we only use the first four values in these vectors (though, of course, the last four values will still be computed by the processor).
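Putting the load and the widening together, one possible sketch of a helper (the name is ours; it assumes pixel and RIDX from defs.h and compiling with -msse4.2):

#include <smmintrin.h>   /* SSE4.1: _mm_cvtepu8_epi16 (also pulls in SSE2) */

static inline __m128i load_two_pixels_epi16(const pixel *src, int i, int j, int dim)
{
    /* 16-byte unaligned load: 4 pixels starting at (i, j). */
    __m128i four_pixels = _mm_loadu_si128((const __m128i *) &src[RIDX(i, j, dim)]);
    /* Widen the low 8 bytes (pixels (i, j) and (i, j+1)) to 16-bit lanes:
       { r, g, b, a of (i, j),  r, g, b, a of (i, j+1) } */
    return _mm_cvtepu8_epi16(four_pixels);
}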
Adding pixels all at once
After loading each of the pixels into a __m128i, you can add the pixels using _mm_add_epi16.
To do this most efficiently, recall our discussion of multiple accumulators.
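For instance, a hypothetical helper (names ours) that sums the nine widened pixel vectors of a 3x3 neighborhood with one accumulator per row, so the additions are independent and can overlap:

#include <emmintrin.h>   /* SSE2: _mm_add_epi16 */

static inline __m128i sum_nine_epi16(__m128i p0, __m128i p1, __m128i p2,
                                     __m128i p3, __m128i p4, __m128i p5,
                                     __m128i p6, __m128i p7, __m128i p8)
{
    /* Three independent row accumulators, then combine them. */
    __m128i row0 = _mm_add_epi16(_mm_add_epi16(p0, p1), p2);
    __m128i row1 = _mm_add_epi16(_mm_add_epi16(p3, p4), p5);
    __m128i row2 = _mm_add_epi16(_mm_add_epi16(p6, p7), p8);
    return _mm_add_epi16(_mm_add_epi16(row0, row1), row2);
}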
After you've added the pixel values together into a vector of 16-bit values, extract the values by storing to a temporary array on the stack:
__m128i sum_of_pixels = ...;
unsigned short pixel_elements[8];
_mm_storeu_si128((__m128i*) pixel_elements, sum_of_pixels);
// pixel_elements[0] is the red component
// pixel_elements[1] is the green component
// pixel_elements[2] is the blue component
// pixel_elements[3] is the alpha component
// pixel_elements[4] through pixel_elements[7] are extra values we had stored
Use the values you extracted to perform the division and store the result in the destination array. Below, we have hints if you want to perform the division and final storing of the result using vector instructions instead of one pixel component at a time.
Working on pairs of destination pixels
In the previous part, you would have added 8 pairs of 16-bit values together, even though you only used 4 of the 8 16-bit results.
To fix this, let's work on computing a pair of destination pixels at once. In particular, if part of our source image looks like this:
A B C D
E F G H
I J K L
we could compute A + B + C + E + F + G + I + J + K (destination pixel F) and B + C + D + F + G + H + J + K + L (destination pixel G) at the same time.
To do this, observe that when we load 8 values (converted to 16-bit integers) from the source image, we get two pixels. In particular, if pixels A and B are
adjacent in memory, then, by loading from A, we get a __m128i
containing
{A.red, A.green, A.blue, A.alpha, B.red, B.green, B.blue, B.alpha}
and by loading from B we get a __m128i
containing
{B.red, B.green, B.blue, B.alpha, C.red, C.green, C.blue, C.alpha}
If we add these together using _mm_add_epi16, we get an __m128i containing the result of A+B and B+C:
{A.red + B.red, A.green + B.green, ..., B.red + C.red, B.green + C.green, ...}
This is part of both the computation for destination pixel F and destination pixel G, so we can take advantage of this to help compute both in parallel.
By taking advantage of this, you can make the loop work on pairs of values which are adjacent in the source array. In fact, you will probably find that you are already computing the value for the next pixel.
Note that because our images always have even sizes, you don’t need to worry about the case when not all pixels have another to be paired with.
In this case, it happens that loading 128 bits from memory got us two pixels we wanted to work on in parallel. If that wasn't the case, there are instructions to "shuffle" the bytes in the %xmm registers. Some of them can be accessed via intrinsic functions containing the word "shuffle".
(If you didn’t have these instructions, you could still rearrange the pixels using ordinary, non-vectorized code.)
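Tying this back to the earlier sketches, one hypothetical way to form the sums for the pair of destination pixels (i, j) and (i, j+1) at once, reusing the load_two_pixels_epi16 and sum_nine_epi16 helpers sketched above:

/* Each widened load covers two adjacent source pixels, so one 3x3 sum of
   the nine loads below yields both destination pixels at once. */
__m128i sum = sum_nine_epi16(
    load_two_pixels_epi16(src, i - 1, j - 1, dim),
    load_two_pixels_epi16(src, i - 1, j,     dim),
    load_two_pixels_epi16(src, i - 1, j + 1, dim),
    load_two_pixels_epi16(src, i,     j - 1, dim),
    load_two_pixels_epi16(src, i,     j,     dim),
    load_two_pixels_epi16(src, i,     j + 1, dim),
    load_two_pixels_epi16(src, i + 1, j - 1, dim),
    load_two_pixels_epi16(src, i + 1, j,     dim),
    load_two_pixels_epi16(src, i + 1, j + 1, dim));
/* Lanes 0-3 of sum: summed r, g, b, a for destination (i, j).
   Lanes 4-7 of sum: summed r, g, b, a for destination (i, j+1). */

(Note that this sketch still loads some of the same columns twice; the extra-optimizations section below mentions rearranging the code to avoid that.)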
Performing the division with SSE instructions
As noted above, dividing a 16-bit number x
by 9 can be performed using the calculation:
((x+1) * 0x71c7) >> 18
Perform this calculation using the vector instructions. You may wish to refer to our reference of some additional vector intrinsic functions.
Hint: Although this is the formula for dividing a 16-bit number by 9, you need more than the lower 16 bits of
the result of the multiplication — otherwise shifting by 18 will always yield 0 (or -1) rather than something useful.
You can, however, avoid doing the entire calculation using 32-bit numbers by taking advantage of _mm_mulhi_epi16
.
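One possible sketch using _mm_mulhi_epi16: the shift by 18 is split into the implicit shift by 16 that _mm_mulhi_epi16 performs (it keeps the upper 16 bits of each 32-bit product) plus an explicit shift by 2. The helper name is ours.

#include <emmintrin.h>   /* SSE2: _mm_add_epi16, _mm_set1_epi16, _mm_mulhi_epi16, _mm_srli_epi16 */

/* Divide each 16-bit lane by 9 via ((x + 1) * 0x71c7) >> 18. The sums never
   exceed 9 * 255 = 2295, so the signed 16-bit multiply cannot overflow. */
static inline __m128i div9_epi16(__m128i sums)
{
    __m128i x_plus_1 = _mm_add_epi16(sums, _mm_set1_epi16(1));
    __m128i hi16     = _mm_mulhi_epi16(x_plus_1, _mm_set1_epi16(0x71c7));  /* >> 16 */
    return _mm_srli_epi16(hi16, 2);                                        /* >> 2  */
}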
Then, you need to convert the 128-bit vector back from 16-bit to 8-bit values. To avoid the extra cost of doing this one pixel component at a time, the easiest way to do this is probably with the _mm_shuffle_epi8 intrinsic.
To use the _mm_shuffle_epi8 intrinsic, you will need a "mask" that tells this instruction to select the 0th, 2nd, 4th, etc. bytes of the vector, removing the upper byte of each 16-bit value. See our reference page for an example of how these masks work.
If you want to avoid the cost of copying pixels to a temporary variable and then copying them to the destination image, there are SSE intrinsics that correspond to functions that write only 32 bits (one pixel) or 64 bits (two pixels) from the vector. You can use these to write one or two pixels at a time directly to the image, rather than writing values to the stack and copying them to the image.
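As a hedged sketch of the kind of narrow stores meant here (these two SSE2 intrinsics exist; the helper names and the memcpy for the 32-bit case are our own choices, and pixel comes from defs.h):

#include <emmintrin.h>   /* SSE2: _mm_storel_epi64, _mm_cvtsi128_si32 */
#include <string.h>

/* Write the two packed pixels in the low 64 bits of `packed` directly into
   the destination image. */
static inline void store_pixel_pair(pixel *dst_pair, __m128i packed)
{
    _mm_storel_epi64((__m128i *) dst_pair, packed);
}

/* Write only the single pixel in the low 32 bits. */
static inline void store_one_pixel(pixel *dst_one, __m128i packed)
{
    int low = _mm_cvtsi128_si32(packed);
    memcpy(dst_one, &low, sizeof low);   /* 4 bytes = one pixel */
}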
Extra optimizations
In addition to the optimizations above, there are further changes that can probably get more speedup:
- Perform additional loop optimizations, such as more loop unrolling or use of multiple accumulators.
- Rearrange the vectorized code to avoid loading the same pixels multiple times or adding the same pairs of pixels multiple times.
- Cache optimizations (maybe?).
- More things we haven't thought to mention.
Code
This section is essentially the same as rotate’s section.
Structures we give you
A pixel is defined in defs.h
as
typedef struct {
unsigned char red;
unsigned char green;
unsigned char blue;
unsigned char alpha;
} pixel;
(The “alpha” component represents transparency.)
In memory, this takes up 4 bytes, one byte for red, followed by one byte for green, and so on.
Images are provided in flattened arrays and can be accessed using RIDX, defined as
#define RIDX(i,j,n) ((i)*(n)+(j))
via code like nameOfImage[RIDX(index1, index2, dimensionOfImage)].
All images will be square and have a size that is a multiple of 32.
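For instance, a hypothetical one-liner reading the pixel in row i, column j of an image src whose side length is dim:

pixel p = src[RIDX(i, j, dim)];   /* element at row i, column j */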
What you should change
In smooth.c you will see several smooth functions:
- naive_smooth, which shows you what a naive implementation looks like. Your code must produce the same result as this.
- You may add as many other smooth methods as you want. You should put each new optimization idea in its own method: smooth_outer_loop_unrolled_3_times, smooth_with_2_by_3_blocking, etc. The driver will compare all your versions as long as you register them in register_smooth_functions.
The source code you will write will be linked with object code that we supply into a benchmark
binary. To create this binary, you will need to execute the command
$ make
You will need to re-make the benchmark program each time you change code in smooth.c. To test your implementations, you can run the command:
$ ./benchmark
If you want to only run a particular function, you can run
$ ./benchmark 'foo'
to only run benchmark functions whose name contains “foo”.
Note that the benchmark results on your machine and the shared lab servers are often different than our grading machine. Generally, we’ve found that optimizations that make code significantly faster on other machines usually make code significantly faster on our grading machine. However, the amount of speedup may be quite different.
Measuring on our grading machine
You can measure the performance on our grading machine by submitting a version of smooth.c
to archimedes.
We will periodically scan for new submissions and run them on our grading server.
You (or your partner) can view detailed results here, which include the times for each version you submitted.
In addition, your last result will appear on a public scoreboard.
Correctness Testing
If one of your implementations produces a wrong answer, you can test it with the ./test
program,
which will show you its complete input and output.
To run this, first build it by running
$ make
then choose a size to test (e.g. 4), and to test your smooth function named smooth_bar use:
$ ./test "smooth_bar" 4
The ./test
program will run all test functions whose description contains the supplied string. For example,
the above command would run a function whose description was smooth_bar: version A
as well as one whose description was bad_smooth_bar_and_baz: version B.
Note that ./test
can run your program on sizes that are not required to work. In particular, we do not
require your functions to work on non-multiple-of-32 sizes, but you may find sizes smaller than
32 convenient for debugging.
Grading
The benchmark program tests your program at several sizes, computes the speedup over the naive code at each size, then takes the geometric mean of these results to get an overall speedup number. This is what we will use to grade. Speedups vary wildly depending on the host hardware; I have scaled the grade based on my timing server's hardware so that particular strategies will get 75% and 100% scores.
Rotate and smooth will each be weighted as a full homework assignment in the gradebook.
Smooth will get 0 points for a 1.0× speedup on my computer, 75% for 2.85×, and 100% for 3.75× speedups, as expressed in the following pseudocode:
if (speedup < 1.00) return MAX_SCORE * 0;
if (speedup < 2.85) return MAX_SCORE * 0.75 * (speedup - 1.0) / (2.85 - 1.0);
if (speedup < 3.75) return MAX_SCORE * (0.75 + 0.25 * (speedup - 2.85) / (3.75 - 2.85));
return MAX_SCORE;
If you and your partner submit many times, your grade will be based on the version submitted that results in the best score, taking late penalties into account.
About our Testing Server
We will be testing the performance of this program on our machine.
We will build your programs with the same compiler as on the lab machines. For this compiler, gcc-5 --version outputs gcc-5 (Ubuntu 5.4.1-2ubuntu1~14.04) 5.4.1 20160904. We will compile your submitted files using the options -g -Wall -O2 -std=gnu99 -msse4.2.
Our testing server has an Intel “Skylake” processor with the following caches:
- A 32KB, 8-way set associative L1 data cache;
- A 32KB, 8-way set associative L1 instruction cache;
- A 256KB, 4-way set associative L2 cache, shared between instructions and data;
- An 8MB, 16-way set associative L3 cache, shared between instructions and data.
The size of a cache block in each of these caches is 64 bytes.
Things about our processor that some students might want to know but probably aren’t that important:
- Our processor also has a 4-way set associative 64-entry L1 data TLB, an 8-way set associative 64-entry L1 instruction TLB, and a 6-way set associative 1536-entry L2 TLB (shared between instructions and data).