Encodeo™ – Video Transcoding on Demand

We are glad to announce a new service added to Hoopoe™ for video transcoding on demand.

The service allows users to transcode (convert) existing video files from various formats to the recent H.264 standard, at unmatched quality, speed, and price.

Using GPU acceleration, we can convert HD movies and beyond at least 10x faster than existing alternatives.

Encodeo™ is not just a basic video transcoding service – users can define advanced parameters for the transcoding process, such as:

  • Resolution
  • Bitrate
  • Filters / effects to apply to the source video
  • and more…

If you are interested in hearing more about the service and how you might use it, please contact us at: support@cass-hpc.com.

For more information: Encodeo™

World Cloud Computing Summit 2009

The 2nd annual cloud computing summit is about to take place in Shfayim, Israel, on December 2-3, 2009.

Following last year's success, the event will cover recent developments and progress in cloud technologies, with presentations from leading companies active in this field, including (partial list): Amazon, Google, eBay, IBM, HP, Sun, RedHat and more.

Additional “hands-on” labs and workshops will be offered during the event for participants who would like to learn more about cloud technologies and integration possibilities.

We will also be presenting Hoopoe, our GPU cloud computing service, at the summit, and giving a workshop on GPU computing in general and on Hoopoe in particular.

This event closes 2009, and symbolically the decade, marking cloud computing as a major development that we will see more and more of in the coming years.

You are invited to join us during the event.

Regular expression for Amazon S3 URL

Hello Everyone,

We recently added support for Amazon S3 storage services to Hoopoe. Following the previous article with our general account details, we wanted to share the regular expression we use for validating S3 URLs as sources of data and files.

You may find more information about S3 naming conventions and requirements in the manuals available from http://aws.amazon.com/s3.

When submitting a task to Hoopoe with input/output sources from Amazon S3, one must specify the S3 URL of the resource. A simple format for a resource can be:

http://s3.amazonaws.com/test-bucket/dir1/input.bin

In this example, the bucket of the user storing the object is called “test-bucket”, and the input file is “dir1/input.bin”, called the key of the object (in the bucket).

This is the general form of S3 URLs that makes them accessible over the internet.

Regular Expression

We use a regular expression to validate all Amazon S3 URLs in tasks submitted to Hoopoe.

The expression follows standard .NET regular expression syntax, and enforces the following limitations:

  1. For DNS compatibility, bucket names must be lower case and start with a letter or number
  2. In S3, and following DNS limitations, bucket names should not exceed 63 characters in length
  3. Object keys can be of variable length; they must start with a valid character, but can continue with other characters, including ‘/’ to denote paths (a file named “dir/input.bin” is located under the “dir” directory)
  4. In addition to the above, Hoopoe restricts S3 URL to be up to 256 characters in length
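
As an illustration, a .NET pattern consistent with these limitations might look like the following. This is a sketch only – the character classes for bucket names and object keys are assumptions, and the exact expression in use may differ:

```csharp
using System;
using System.Text.RegularExpressions;

class S3UrlCheck
{
    // Sketch only - the exact character classes are assumptions.
    // Bucket: lower case, starts with a letter or digit, 1-63 chars.
    // Key: starts with a valid character, may continue with '/' to
    // denote paths.
    static readonly Regex S3Url = new Regex(
        @"^http://s3\.amazonaws\.com/" +
        @"[a-z0-9][a-z0-9\-\.]{0,62}/" +     // bucket name
        @"[a-zA-Z0-9][a-zA-Z0-9_\-\./]*$");  // object key

    static bool IsValidS3Url(string url)
    {
        // The 256 character limit (rule 4) is checked separately,
        // since it spans the whole URL rather than one component.
        return url.Length <= 256 && S3Url.IsMatch(url);
    }

    static void Main()
    {
        Console.WriteLine(
            IsValidS3Url("http://s3.amazonaws.com/test-bucket/dir1/input.bin"));
    }
}
```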

If you find a mistake in the regular expression definition – whether it rejects valid URLs or is too permissive – please send us an email.
We hope you find this information useful for your own purposes as well.

Amazon S3 Integration


We are pleased to announce that we recently added support for Amazon S3 services, integrated into Hoopoe.

Using Amazon S3, users gain extended storage support from Amazon Web Services (AWS), and can communicate with other cloud systems, such as EC2, to combine a variety of processing capabilities.

Users who would like to use Amazon S3 can do so through a very intuitive interface, specifying the buckets and objects they use, following S3 semantics and terms. Hoopoe then offers bi-directional communication with S3: reading input data and writing computed results.

We will follow up with more articles presenting best-practice guides for using Amazon S3 with Hoopoe.

As general information, users can use the following details to identify Hoopoe in Amazon S3.

Hoopoe Amazon S3 details:

  • E-mail Username: support@cass-hpc.com
  • Canonical User ID: 939155fee5acfced9622d4a7df63e8a1fd54a24290a81871fd7d20f43aa758dd

We highly encourage users to use the e-mail form of identification for Hoopoe support when adding an ACL record in Amazon S3.

For more information about Amazon S3: http://aws.amazon.com/s3/

Hoopoe Cloud.

Using Hoopoe File System (HoopoeFS)

The Hoopoe File System service reference can be found at: http://www.hoopoe-cloud.com/HoopoeFS.asmx

This post presents the File System interface to the Hoopoe distribution engine. The File System (FS) interface can be used to transfer data files to be processed by Hoopoe with CUDA computing kernels. After processing completes, the same interface can be used to read back the computed results.


  • Features
  • General terms
  • API description
  • API examples
    • Creating new instance
    • Authenticating
    • Creating a file
    • Creating a directory
    • Creating a file under a sub-directory
    • Deleting a file
    • Writing data into files
    • Reading data from files

1. Features

HoopoeFS exposes a simple interface for data and file management. In general, most features available in common OS file systems are provided by the HoopoeFS service, giving users high flexibility.

Taking security into consideration, every user is provided with a completely isolated environment, so no special security functions need to be used or exposed: every user sees, and is able to access, only the files they generated or uploaded.

While the API provided by HoopoeFS is generic, there are a few limitations on user operations and capabilities. For example, a user may place files in the root directory or under sub-directories, but may create only one level of sub-directories, each able to contain additional files.

2. General terms

As previously mentioned, HoopoeFS provides all general constructs for working with files and directories.

A file is simply a container for data, either raw or compressed, and can be named using any supported character.

A directory is a container for files. The root directory is provided by default, and further sub-directories can be created by the user.

3. API description

For the data constructs (File, Directory), the following management functions are provided:

  • CreateFile/CreateDirectory – creates a new file or directory, respectively. Calling these functions is a required first step before accessing a file or directory.
  • DeleteFile/DeleteDirectory – deletes a previously created file or directory.
  • RenameFile/RenameDirectory – renames an existing file or directory.
  • WriteFile/ReadFile – writes content to a file, or reads content from a specific file.
  • GetFileSize – returns the number of bytes in a file.

For general operation, a few more functions are provided:

  • Authenticate – returns a value indicating whether the user is registered and recognized by HoopoeFS.
  • IsUserOverQuota – returns a value indicating whether the user has exceeded the allowed storage space. In such a case, the user cannot create new files or directories, but can still delete files and read their contents.

4. API examples

4.1 Creating new instance

In order to work with HoopoeFS, it is necessary to create a new instance of the HoopoeFS class:
HoopoeFS hfs = new HoopoeFS();

4.2 Authenticating

It is good practice to check with HoopoeFS that we are authenticated before performing further operations. Every subsequent operation must carry this authentication information:

Authentication a = new Authentication();
a.User = "test@company_alias";
a.Password = "my_password";
hfs.AuthenticationValue = a;

4.3 Creating a file 

Creating a file is a simple task with HoopoeFS API:
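
A sketch, assuming CreateFile takes the name of the new file (the exact signature may differ):

```csharp
// Create a file named "temp.dat" in the root directory.
// Assumes CreateFile takes the file name.
hfs.CreateFile("temp.dat");
```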


4.4 Creating a directory

Following the previous example, a similar API can be used to create a new sub-directory (all directories are created under the root):
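
A sketch, assuming CreateDirectory takes the directory name, just as CreateFile takes a file name:

```csharp
// Create a sub-directory named "test_data" under the root.
// Assumes CreateDirectory takes the directory name.
hfs.CreateDirectory("test_data");
```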


4.5 Creating a file under a sub-directory

Once a sub-directory has been created, any number of files can be created under it.

To do that, the following operations are necessary:

Directory d = new Directory();
d.Name = "test_data";
hfs.DirectoryValue = d;

Note that once hfs.DirectoryValue is set, all file-related operations apply to that directory (creating new files, deleting, modifying, etc.), so when you are done working with the directory, hfs.DirectoryValue should be set back to null.
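
For example, a sketch that assumes the CreateFile function described above:

```csharp
// With DirectoryValue set to "test_data" (as above), the new
// file is created under that directory rather than the root.
// Assumes CreateFile takes the file name.
hfs.CreateFile("input.bin");

// Done with the directory: operations apply to the root again.
hfs.DirectoryValue = null;
```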

4.6 Deleting a file

A very straightforward operation:
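
A sketch, assuming DeleteFile takes the file name:

```csharp
// Delete the previously created file.
// Assumes DeleteFile takes the file name.
hfs.DeleteFile("temp.dat");
```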


4.7 Writing data into files

The concept of writing data to files within Hoopoe maps to the real world, with a simplified API.

byte[] data = new byte[512*1024];
// Load/generate data

// Write the data, starting at offset 0 of the file
long offset = 0;
hfs.WriteFile("temp.dat", data, offset);

// To write more data, advance the offset
offset += data.Length;

4.8 Reading data from files

The same rules for writing data apply to reading it from files.

// Read the data, starting at offset 0 of the file
long offset = 0;
// Determines the amount of bytes to read
int length = 512*1024;
byte[] data = hfs.ReadFile("temp.dat", offset, length);

// Past this point, data will contain the bytes
// that were read.
// In case fewer bytes than requested were read,
// the size of data will be consistent with the actual
// bytes read.

// To read more data, advance the offset
offset += data.Length;

Security in Hoopoe

Security is always a major part of a cloud system, and requires great effort to design and maintain.
The challenge usually begins when users are given access to actual machines, where they can run applications on the operating system, whether Windows or Linux based.

Security models in Hoopoe

Hoopoe provides several mechanisms to address these concerns.

Isolated user environment

Hoopoe provides each user with a unique, isolated environment. This way, only the user can access their files and computations, using the specific mechanisms provided for file management and related operations.

Hiding the “metal”

Hoopoe hides the “metal” from the user, providing access only through a web service interface to communicate with the system.
Thus, the user is limited in the flexibility of the code they can run.
There is no direct access to machines: the user submits a task to Hoopoe for processing by the system, waits for the task to finish, and copies the results back.

Independent data management

User data is managed by Hoopoe as files, either raw or compressed (using GZip).
Each buffer is then read in a fully managed (.NET) environment, reducing the risk posed by malformed or “bad” files.

Running computations

Hoopoe is meant to run computations, not to serve as an operating system. As such, user tasks are compiled on demand for the platform they will be processed on (32/64-bit, or specific hardware support).

Computations run on the GPU itself, and that is the full extent of the interaction with it: copying in the relevant data, performing the computations, and placing the results back in the appropriate buffer.

Announcing Hoopoe – Cloud Services for GPU Computing

We are happy to introduce to you “Hoopoe”, a cloud solution for GPU computing.

You may all have expected it to become available sometime, and indeed it is.

Hoopoe provides a web service interface to communicate with. In the near future it will also provide machine level access to run specific applications like with regular CPU based clouds.

Partial feature list of the system:

  • CUDA Support
  • Executing CUDA kernels, FFT and BLAS routines
  • OpenCL Support
  • Executing OpenCL kernels
  • Fully secure (see the “Security in Hoopoe” post)

Take a further look at Hoopoe™. The system will open for alpha testing very soon, and you are invited to register.