New CLFORTRAN examples and update

New examples are available on the CLFORTRAN page.

These include:

  • Querying platform and device information (see the sketch below)
  • Creating OpenCL context and command queue
  • Basic device IO with Fortran arrays and validity testing
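
For readers who want a feel for the API, here is a minimal sketch of the platform query. It assumes the clfortran module exposes clGetPlatformIDs with the same name and argument order as the C API (num_entries, a C pointer to an ID buffer, and a returned count); the exact integer kinds may differ between CLFORTRAN versions.


program query_platforms
  use clfortran
  use iso_c_binding
  implicit none

  integer(c_int32_t) :: ierr, num_platforms
  integer(c_intptr_t), allocatable, target :: platform_ids(:)

  ! First call: ask only how many platforms are available
  ierr = clGetPlatformIDs(0, C_NULL_PTR, num_platforms)

  ! Second call: retrieve the platform IDs themselves
  allocate(platform_ids(num_platforms))
  ierr = clGetPlatformIDs(num_platforms, C_LOC(platform_ids(1)), num_platforms)

  print *, 'OpenCL platforms found:', num_platforms
end program query_platforms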

In addition, the CLFORTRAN API was improved to better support OpenCL functionality from Fortran.

Announcing CLFORTRAN

We are pleased to announce CLFORTRAN for GPGPU.

CLFORTRAN is a new and elegant Fortran module that makes integrating OpenCL with Fortran programs easier than ever.
Taking advantage of Fortran language features, it is written in pure Fortran – that is, no C/C++ code is required to utilize the GPGPU.

CLFORTRAN is compatible with all major compilers (GNU, Intel and IBM) and supports the OpenCL 1.2 API.
In addition, it is provided as open source under the LGPL, to enable scientific computing at massive scale across all supported vendors.

You may read more at CLFORTRAN.

Intel® Released the Xeon® Phi™ Processor (MIC)

Intel® has just introduced its Xeon® Phi™ processor to the market, targeting HPC and scientific computing. It is available for purchase and integration into existing systems, platforms and servers.

The new co-processor is a discrete device that runs an operating system of its own and functions as a complete computer in its own right (despite being a co-processor).

Why should anyone be interested in Xeon® Phi™?

It is based on the ubiquitous x86 architecture, so porting existing code and algorithms should be as straightforward as possible.
One may also use OpenCL™ algorithms to take advantage of the high degree of parallelism the Phi™ processor offers.

In addition, it features 60 cores, 8 GB of on-board memory (with 320 GB/s of bandwidth) and uses a PCIe x16 slot for high-bandwidth data transfer.
With almost 1 TFLOPS of double-precision performance, the Phi™ is competitive with the very high-end GPUs on the market today, and in some respects offers better performance and a better industrial fit than other vendors.

You can read more at:

http://www.intel.com/content/www/us/en/processors/xeon/xeon-phi-detail.html

Contact us for more details and projects regarding Intel® Xeon® Phi™ family of processors.

Using CUDA FFT from FORTRAN

In this post we will demonstrate how to call the CUDA FFT routines (CUFFT) from a FORTRAN application, using the native CUDA interface and our bindings.

CUFFT usage

NVIDIA's CUFFT library follows the conventions of the FFTW library for running FFTs.
For example, executing a 2D FFT over a 256×256 data set involves the following steps.

General GPU steps:

  1. Select the GPU device to work with
  2. Allocate enough device memory to store data
  3. Transfer input data to device

FFT steps:

  1. Create FFT plan with specific dimensions
  2. Execute FFT on device with input and output parameters
  3. Destroy FFT plan

After computing steps:

  1. Copy results back to CPU memory (RAM)
  2. Release device memory
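
The code fragments in the following sections reuse a small set of variables: device and context handles, a device pointer, an FFT plan, the dimensions and the host array. As a hypothetical sketch, not taken from the original bridge code, the declarations could look roughly like this (the exact integer kinds depend on how cuda.o and cufft.o pass handles):


! Hypothetical declarations for the fragments below; the kinds
! are assumptions that depend on the bridge code in cuda.o/cufft.o
integer :: idev, iplan             ! device ordinal and CUFFT plan handle
integer*8 :: ictx, iptr            ! context handle and device pointer
integer, parameter :: inx = 256, iny = 256
complex :: data(inx, iny)          ! single precision complex data set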

Let’s code

General GPU steps

There are two possible ways to select the device we want to work with: one is the driver interface, and the other is the runtime interface.

Selecting a device with the CUDA driver interface is a bit more complicated but adds more flexibility; we show it first, with the runtime alternative sketched afterwards.


! Initialize CUDA, default flags
call cuInit(0)
! Get a reference to the 1st device in the system
! recognized by CUDA
call cuDeviceGet(idev, 0)
! Now, create a new context and bind it to the
! device we got before
call cuCtxCreate(ictx, 0, idev)

This code fragment implements step 1 of the general GPU steps: we selected the first device in the system as the one to work with.
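
For comparison, the runtime interface reduces device selection to a single call. The sketch below assumes the bridge code also exposes the CUDA runtime routine cudaSetDevice, which is not used elsewhere in this post:


! Runtime-interface alternative (assumes a cudaSetDevice binding
! exists in the bridge code); the runtime creates and binds a
! context implicitly on first use
call cudaSetDevice(0)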

Allocating device memory is done with CUDA's cuMemAlloc function.
For example:


! Allocate memory for an inx * iny array of single
! precision complex elements (2 * 4 bytes each)
call cuMemAlloc(iptr, inx * iny * 4 * 2)

This maps to step 2 of the general GPU steps.

To copy memory from the CPU to the GPU (the device), we issue cuMemcpyHtoD, meaning a Host-to-Device copy.


! Assume that data was defined as COMPLEX data(inx, iny)
call cuMemcpyHtoD(iptr, data, inx * iny * 4 * 2)

This maps to step 3 of the general GPU steps.

With that, the data is prepared on the GPU and we are ready to run the FFT routine.

FFT steps

Using the CUFFT library is relatively easy, as the following example shows.


! Here we create the FFT plan; note that the dimensions
! of the FFT are specified at this stage, so the plan
! can be reused later.
! The last parameter denotes the type of FFT to perform:
! Real->Complex, Complex->Real or Complex->Complex.
! The value 41 (hexadecimal 0x29) represents Complex->Complex;
! defining a named constant for it is good practice.
call cufftPlan2d(iplan, inx, iny, 41)

This maps to step 1 of the FFT steps: creating an FFT plan.

Once we have the plan, we can simply execute the requested FFT and get the results back.


! Execute the FFT according to our plan. Specifying
! iptr for both input & output means an in-place FFT.
! It is possible to store the results in a different buffer.
! The value -1 denotes the direction of the FFT, where
! -1 is forward and 1 is inverse.
call cufftExecC2C(iplan, iptr, iptr, -1)

This maps to step 2 of the FFT steps.

Once the FFT has executed and we are finished working with it, it is time to release the resources consumed by the FFT library.


! Destroy the FFT plan
call cufftDestroy(iplan)

This completes the FFT steps.

After computing steps

The computations on the GPU are now complete, so we can copy the results back to CPU memory for further processing.


! Use the Device->Host function to copy the
! computed data from GPU to CPU
call cuMemcpyDtoH(data, iptr, inx * iny * 4 * 2)

This maps to step 1 of the after-computing steps. After this copy command, the data computed by the GPU is available in the "data" array variable.

Now we shall release the GPU resources used during our computation.


! Free the GPU memory we allocated previously
call cuMemFree(iptr)
! Destroy the CUDA context; this happens anyway when the
! process exits, but releasing it explicitly is a good habit
call cuCtxDestroy(ictx)

That is it: our code is complete, and we have used the GPU to compute an FFT.

Final words

This example showed how to perform FFT computations on the GPU with NVIDIA's CUDA framework. The FFT is a very important tool for many applications and scientific computations, and the GPU can improve FFT performance by large factors compared to the CPU.

Compiling

If you are using gfortran, g77, g95 or ifort under Linux, compile the above FORTRAN code by simply issuing the command:


gfortran fft.f cuda.o cufft.o -lcufft -lcuda

Here gfortran can be replaced by your favoured compiler. The libraries libcufft.so and libcuda.so come as part of the NVIDIA CUDA Toolkit and driver, so they are present on any machine with those installed. The files cuda.o and cufft.o contain the bridge code needed for FORTRAN-to-C communication.
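
If the linker does not find the CUDA libraries on its default search path, pointing it at the toolkit's library directory usually helps; /usr/local/cuda/lib64 below is the common default install location and may differ on your system:


gfortran fft.f cuda.o cufft.o -L/usr/local/cuda/lib64 -lcufft -lcuda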