CRID Systems and Compute

The CRID Team supports multiple platforms and departments for your research computing needs.

 

1500+ Hosted Virtual Machines

3 Cloud Service Providers

103,000+ Average compute hours per month

Get Help

CRID Team Consultation

Systems and Research Computing Services 

Includes data center hosting services, research computing, and general server support and administration.

Areas of Service

  • Supported Configurations

    Type:

    • Rack-hosted physical server(s)
    • Virtual servers

    Windows OS configurations:

    • Windows Server 2019
    • Windows Server 2022

    Linux OS configurations:

    • Ubuntu
    • Rocky Linux
    • Oracle Linux
  • Features
    • Highly available
    • Scalable
    • Backups available upon request
    • Supported/managed by systems administrators
    • Customizable
  • Service Expectations and Limits

    Maximum allowable size for any single VM:

    • vCPU cores: 12
    • Memory: 32 GB
    • Disk: 2 TB

    *Cluster builds are available; see HPC Consultation.

  • What's PACC?

    PACC is a joint initiative of the IT and Research departments. By pooling resources and funding, the departments can acquire a cutting-edge HPC system capable of handling complex calculations and large datasets at speed. The system can be used to accelerate research and development, enhance data analysis, and enable a wide range of applications. Through this cooperation, the IT and Research departments ensure that the HPC system is well maintained and used to its full capacity, serving the UT Health research community.

  • HPC Shared Computing Cluster

    Condo Computing:

    The condo model gives researchers far greater flexibility than owning a standalone cluster. The PACC cluster is shared among researchers, who can buy nodes for the cluster, subscribe, or purchase SUs for short-term access. ESO provides all of the supporting infrastructure: the data center facility, networking, redundant head nodes, login nodes, spare nodes, and cluster management. Purchased nodes are implemented with priority access for the contributor, and researchers may divide their compute-core-hour allocation among members of their research group.

     

    More Information

  • PACC Support Model
    • Cluster management, proactive monitoring, and support
    • Implementation of industry best practices
    • Triage and vendor management for hardware and licensed software (Bright Cluster Manager)
    • Training for job, user, and queue management
    • Test queue setup to allow image testing
    • Online documentation
    • Monthly or quarterly utilization reporting
    • Base OS, scheduler, and cluster manager (CMS) support and fixes
    • Recommendations for improvement
  • Platforms

    Azure Virtual Desktop


    VMware Horizon

  • Service Options
    • Persistent
    • Non-Persistent
    • Published Application  
  • Benefits and Features
    • Flexibility - Run legacy and homegrown applications from any device; run Windows-only applications on non-Windows devices; specialized software can be added, updated, or removed quickly
    • Mobility - Access from anywhere, any device
    • Availability - Redundant power, network, and systems datacenter
    • Security - Allow secure access to sensitive data; remove content from the endpoint device; reduce threats from theft or compromise from the client device
    • Manageability - Simplify desktop support and management; manage automated upgrades, patches, and version control
    • Extend the life of older hardware - Users don't need to buy their own copies of software to be installed on their devices

Testimonials

  • Jaynal is a complete professional. He helped with our projects in a timely and dedicated manner, and was able to find a solution for a problem that had persisted for a long time.

    Muralidharan Sargurupremraj, PhD

Introduction to PACC Cluster

 

PACC: Partnership of Advanced Computing Community


103,000+ Average compute hours per month

101,500+ Average jobs per month

  • Eligibility

    Easy access 

    • OpenHPC: a Linux Foundation project that provides software and tools for HPC environments. It aims to offer a reliable, standard software stack for HPC systems, simplifying the installation and operation of HPC clusters.

    • Coldfront: an open-source system that HPC centers use to manage their resources. It lets centers regulate who can access resources, monitor usage, and generate reports on resource use. Coldfront provides a web-based interface where users can request and view their resource allocations and usage, and gives administrators tools to manage users, resources, and allocations.

    • Open OnDemand: a browser-based portal that lets users perform common tasks on HPC systems, such as running and monitoring jobs, transferring files, and using interactive applications. Open OnDemand aims to make HPC resources easier and more accessible, especially for researchers and scientists who may not be familiar with command-line interfaces.


     Research IT support & training by ESO

    • Our Research IT staff provide full support, maintenance, and training for research staff.

  • Benefits of Membership?

    Only UT Health San Antonio faculty may purchase condo nodes or hotel plans on the SCC. To do so, submit a Service Request (SR) with a Project ID for the funds transfer. Condo-model contracts run for five years.

    Cost-effective: By pooling resources and sharing the cost of the HPC system, users can access high-performance computing resources at a lower cost than if they were to set up their own infrastructure.

    Shared maintenance and support: The maintenance and support of the HPC system are shared among the users, reducing the burden on individual users.

    Scalability: Users can easily scale their computing resources by purchasing additional nodes within the system as their needs grow.

    Collaboration: The shared nature of the condo model HPC system facilitates collaboration between users, enabling them to work together on complex problems and share data and resources.

  • How can researchers help?
    • Allocating a portion of their budget: Research departments can allocate a portion of their annual budget towards the funding of an HPC system.

    • Applying for grants: Research departments can apply for grants from government agencies or private organizations that support the advancement of research and technology.

    • Collaborating with other departments: Research departments can collaborate with other departments within their organization to pool resources and share the cost of funding an HPC system.
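
Work on clusters built on the OpenHPC stack is normally submitted through a batch scheduler such as Slurm, which OpenHPC bundles. As an illustrative sketch only, the script below shows what a minimal Slurm job might look like; the partition, account, module, and script names are placeholders, since actual values are assigned when an allocation is set up (for example, through Coldfront).

```shell
#!/bin/bash
# Minimal Slurm batch script (illustrative sketch; names below are hypothetical)
#SBATCH --job-name=example-job    # name shown in the queue
#SBATCH --partition=normal        # placeholder: use the partition assigned to your allocation
#SBATCH --account=my_lab          # placeholder: the allocation account charged for SUs
#SBATCH --ntasks=1                # run one task
#SBATCH --cpus-per-task=4        # with four CPU cores
#SBATCH --mem=8G                  # memory for the job
#SBATCH --time=01:00:00           # wall-clock limit (HH:MM:SS)
#SBATCH --output=%x-%j.out        # log file named from job name and job ID

# Load software from the cluster's environment module system (module name is a placeholder)
module load python

# Run the actual work (script name is a placeholder)
python my_analysis.py
```

A script like this would be submitted with `sbatch job.sh` and monitored with `squeue -u $USER`; Open OnDemand exposes the same operations through its web interface for users who prefer not to use the command line.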

Virtual Desktop Services

Azure

The Azure Virtual Desktop (AVD) service provides users with remote access, from anywhere and on any device, to a full Windows desktop hosted in the Microsoft Azure cloud.

More Information

  • Service Options
    • RDSH Published Application - Request access to University published applications such as Chrome, Firefox, and Edge, or request that a client-based application be published using RDSH technology (minimum 50 concurrent licenses)
    • Virtual Desktop Lite (Random/Non-Persistent) - Provides users with remote access from anywhere, on any device, to a full Windows desktop hosted in the data center. 2 vCPU / 6 GB RAM / 5 GB profile space
    • Virtual Desktop Standard (Static/Persistent) - Provides users with remote access from anywhere, on any device, to a fully customizable Windows desktop hosted in the data center. 2 vCPU / 6 GB RAM / 20 GB profile space / 20 GB hard disk for applications
    • Virtual Desktop Power User (Static/Persistent) - Provides users with remote access from anywhere, on any device, to a fully customizable Windows desktop hosted in the data center. 2 vCPU / 8 GB RAM / localized profile space / 100 GB hard disk for applications
  • Desktop Protection

    Type 2 virtual desktop backups are automated. Backups are retained for 3 days, and restoring a desktop from backup returns it to any state within the past 3 days. Desktops that have been cancelled and deleted cannot be restored from backup.

  • Disaster Recovery (DR) and Redundancy

    Redundant VDI capacity is available at the secondary data center site to support virtual desktop provisioning in the event of an outage of the primary data center. New DR-ready virtual desktops can be deployed to end-user devices so users can continue working, keeping the core business alive.

Computing Services


UT Health IT System Operations and Administration offers managed servers, hosted server environments, virtual servers, physical servers, and server software installation services.