PhoenixSCC - Shared Compute Cluster


What is the Phoenix Project?

The Phoenix Project, based on Condor, is a specialized workload management system for compute-intensive jobs. Like other full-featured batch systems, this Shared Compute Cluster (SCC) provides a job queueing mechanism, scheduling policy, priority scheme, resource monitoring, and resource management. Users submit serial or parallel jobs to the SCC; the SCC places them into a queue, chooses when and where to run them based upon a policy, carefully monitors their progress, and ultimately informs the user upon completion.

Over the summer of 2011, the Midrange Tech Support team was tasked with designing a compute cluster. Our resourceful team gave recycled and decommissioned servers new life, making the necessary hardware repairs and assembling them into a shared compute cluster environment. This resurrection of decommissioned hardware led us to an appropriate name: The Phoenix Project.

Our goal is to provide compute resources to our research community at no charge. To ensure we design a solution with true value, however, we need to understand what those needs are and dynamically evolve the environment to keep meeting them. With that in mind, we met with several researchers, professors, scientists, and statisticians to talk about the diverse computational jobs they need to run. These workloads might require fast, short-term compute resources, or long-term resources measured in weeks or months. We invite you to help us unlock the potential of our shared compute cluster to meet your research needs.

How can this Shared Compute Cluster benefit you?

The SCC can streamline a researcher's work by allowing many jobs to be submitted at the same time. In this way, tremendous amounts of computation can be done with very little intervention from the user. Moreover, the SCC lets users take advantage of computational power in a clustered environment managed by a central application, rather than tying up resources on their personal computers.

A user submits a job to the SCC. The job is executed on a remote machine within the pool of machines available to the SCC, and the results are sent back to the user. The types of jobs the SCC can accommodate include executables that would normally run under Unix/Linux, C language source code, and platform-specific jobs (currently Linux/Intel) drawing on a long list of open source and commercial software.
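To make the workflow above concrete, a minimal Condor submit description for a batch of serial jobs might look like the following sketch. The executable name, input files, and job count here are purely illustrative placeholders, not part of the SCC's configuration:

```
# Hypothetical submit description (e.g. my_analysis.sub).
# "my_analysis" stands in for your own Unix/Linux executable.
universe   = vanilla
executable = my_analysis
arguments  = input_$(Process).dat

# Per-job output, error, and a shared Condor event log
output     = my_analysis_$(Process).out
error      = my_analysis_$(Process).err
log        = my_analysis.log

# Transfer each job's input file to the execute machine
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
transfer_input_files    = input_$(Process).dat

# Queue 10 independent instances, numbered 0 through 9
queue 10
```

Running `condor_submit my_analysis.sub` would place all ten jobs in the queue at once; `condor_q` then shows their progress, and results are written back to the submitting machine as each job completes.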

The PhoenixSCC Infrastructure at Johns Hopkins

The hardware environment offers the following compute resources:

  • Management Nodes
    • 2 x Dell R815
  • Compute Nodes
    • IBM x3650's
    • Dell 2950's
    • Cisco UCS Blades
    • Dell C8220 high-performance sleds
  • Network
    • 2Gbps port-channel private network between nodes
  • Storage
    • HP EVA 8000 (24 TB raw capacity)
    • IBM Storwize V3700 (43 TB raw capacity) - Soon to replace the EVA

While Condor is the primary job scheduler, we are evaluating the following:

  • Torque
  • Grid Engine
  • SLURM

For a list of applications, please see our "Applications List."

To get started, please see this User Guide for instructions on using the environment.

For an overview of our deployment architecture, please see our deployment architecture page.

For questions or comments, please contact The PhoenixSCC Team at Johns Hopkins at phoenixscc@johnshopkins.edu.