# Research Computing
Texas A&M provides shared High Performance Computing (HPC) clusters for faculty and student research. These resources are managed by the Academic Operations Linux team and are available to researchers across Statistics, Engineering, and Arts & Sciences.
This documentation is maintained by the Academic Operations Linux team. For help, contact artsci-help@tamu.edu or open a ticket in the TS-AO-Linux-Infrastructure queue in TDX.
## Getting started
New to HPC? Start here:
- Connecting via SSH — set up MobaXterm (Windows), XQuartz (macOS), or a Linux terminal
- Submitting Jobs with Slurm — write batch scripts, request resources, run interactive sessions
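As a quick illustration of the SSH workflow (a sketch, not cluster-specific guidance — `netid` is a placeholder for your TAMU NetID, and `arseven.stat.tamu.edu` is simply one login address from the cluster table below):

```shell
# Log in to a cluster login node (replace netid and the hostname):
ssh netid@arseven.stat.tamu.edu

# Enable X11 forwarding for graphical programs
# (requires XQuartz on macOS or MobaXterm on Windows):
ssh -X netid@arseven.stat.tamu.edu

# Copy a file to your home directory on the cluster:
scp input.dat netid@arseven.stat.tamu.edu:~/
```

See the cluster-specific documentation for the correct login address and account request process before connecting.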
## Available clusters
| Cluster | Department | Login Address | Documentation |
|---|---|---|---|
| Arseven | Statistics | arseven.stat.tamu.edu | Arseven docs |
| Orchard | NUEN Engineering | orchard.engr.tamu.edu | Orchard docs |
| Olympus | ECEN Engineering | olympus.ece.tamu.edu | Docs in progress |
| Atlas | General Engineering | atlas.engr.tamu.edu | Docs in progress |
Each cluster has its own account request process, hardware profile, Slurm partitions, and installed software. Always consult the cluster-specific documentation before submitting your first job.
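Once logged in, the standard Slurm and environment-module commands below are a common way to inspect what a cluster offers — a sketch only, since module availability and names vary by cluster:

```shell
# List Slurm partitions and node states on this cluster:
sinfo

# List software modules installed on this cluster
# (on clusters using Environment Modules or Lmod):
module avail

# Load a module into your session (module names vary per cluster):
module load gcc
```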
## Research Computing topics
- Connecting via SSH — Windows (MobaXterm), Mac (Terminal + XQuartz), Linux (openssh)
- Submitting Jobs with Slurm — sbatch scripts, srun interactive sessions, MPI/Python/MATLAB/R examples
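The batch workflow listed above can be sketched with a minimal `sbatch` script. This is an illustrative fragment, not a cluster-specific template: partition names, module names, and resource limits differ per cluster, so check the cluster docs before adapting it.

```shell
#!/bin/bash
#SBATCH --job-name=example      # job name shown in squeue
#SBATCH --time=01:00:00         # wall-clock limit (HH:MM:SS)
#SBATCH --ntasks=1              # one task
#SBATCH --cpus-per-task=4       # four cores for that task
#SBATCH --mem=8G                # memory for the whole job
#SBATCH --output=%x-%j.out      # stdout file: jobname-jobid.out

module load python              # module name varies per cluster
python my_script.py             # my_script.py is a placeholder
```

Submit with `sbatch job.sh`, monitor with `squeue -u $USER`, and cancel with `scancel <jobid>`. For interactive work, `srun --pty bash` requests a shell on a compute node.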
## Related
- Linux Endpoint Documentation — Linux workstation and laptop guides
- Infrastructure — Cloud and on-prem infrastructure documentation