CETS Research IT Support

CETS supports research groups' large-scale computing needs on a case-by-case basis. Based on our prior experience, we have developed two support models: massively parallel and embarrassingly parallel.

Massively Parallel Workloads

For massively parallel workloads, we provide a standard High Performance Computing cluster running a variant of Red Hat Enterprise Linux: one externally visible head node uses Torque and Maui to schedule jobs, which run on internal compute nodes, with storage supplied by an internal storage node serving NFS to the compute nodes and the head node. We manage all of this with a standard set of administrative support hardware.

Low-latency specialty interconnects are supported at additional cost.
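
To illustrate the kind of job this model targets, here is a minimal message-passing sketch in Python. It assumes the mpi4py package and an MPI runtime are available on the cluster (an assumption; the actual software stack depends on the system we build for you), and it would typically be launched across the compute nodes from a Torque job script via mpirun.

    # Minimal MPI sketch (assumes mpi4py is installed; hypothetical example).
    from mpi4py import MPI

    comm = MPI.COMM_WORLD     # communicator spanning every rank in the job
    rank = comm.Get_rank()    # this process's rank, 0 .. size-1
    size = comm.Get_size()    # total number of processes the scheduler allocated

    # Each rank computes a partial result; rank 0 gathers and reports the sum.
    partial = rank * rank
    total = comm.reduce(partial, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"{size} ranks computed a total of {total}")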

Embarrassingly Parallel Workloads

For embarrassingly parallel workloads, we provide a standard High Performance Computing grid running openSUSE on several externally visible compute nodes. Two cores on one compute node are reserved for running the Sun Grid Engine scheduler and queue manager. An externally visible storage node is optional; it will serve NFS only to the compute nodes within the grid. We house all of the equipment in our data center.
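
To illustrate how an embarrassingly parallel workload maps onto this model, here is a minimal Python sketch of one task in a Sun Grid Engine array job (submitted with qsub -t). SGE sets the SGE_TASK_ID environment variable for each task; the input and output file names below are hypothetical placeholders.

    # One task of an SGE array job; SGE_TASK_ID identifies this task.
    import os

    task_id = int(os.environ.get("SGE_TASK_ID", "1"))

    in_path = f"inputs/chunk_{task_id}.txt"     # each task reads its own chunk
    out_path = f"results/chunk_{task_id}.out"   # and writes its own result

    with open(in_path) as fin:
        lines = fin.readlines()

    # Example per-task work: count the lines in this chunk.
    with open(out_path, "w") as fout:
        fout.write(f"task {task_id} processed {len(lines)} lines\n")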

Built-in Services

Built into the standard models of support are a set of services that we provide throughout the operating lifecycle.

We will work with end users to provision the necessary backup architecture, and at end of life we will handle responsible disposition and offer data retention options.

Other Options

Beyond those two standard models, we can tailor our support to your research needs. We can consult at an hourly rate to find a technical solution that meets those needs; provide quotes only; purchase and configure the systems; and/or manage your solution (operating system, hardware, applications, user support) from beginning to end.

Data Center Details

Our data center offers the physical infrastructure needed to house and operate this equipment.
