```shell
pip install alphai
```

Install AlphAI, then start up remote GPU servers, prototype AI and LLMs, benchmark code, and analyze profiling data.

Use AlphAI to spin up GPU servers running a Jupyter Lab IDE, with any number of runtime environments that support PyTorch, Hugging Face, and more.
```python
import os

from alphai import AlphAI

# Reads your API key from the ALPHAI_API_KEY environment variable
aai = AlphAI(
    api_key=os.environ.get("ALPHAI_API_KEY")
)

# Start a remote Jupyter Lab server
aai.start_server()

# Upload a file to your server's file system
aai.upload("./main.py")

# Start a Python kernel and run code remotely
code = "print('Hello world!')"
aai.run_code(code)
```
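Conceptually, running code remotely means shipping the snippet to a kernel, executing it, and capturing whatever it prints. A minimal local stand-in for that idea (plain Python only, no AlphAI involved) looks like this:

```python
import contextlib
import io

code = "print('Hello world!')"

# A local sketch of what a remote kernel does: execute the
# snippet and capture its stdout.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(code)

print(buf.getvalue())  # → Hello world!
```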
Analyze and profile tensor traces from your deep learning runs on your GPU servers.
```python
import math

import torch
from alphai import AlphAI

aai = AlphAI()

# A small model to trace, moved to the GPU
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1),
).to("cuda")

# Build a (2000, 3) input of powers of x: [x, x^2, x^3]
x = torch.linspace(-math.pi, math.pi, 2000)
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p).to("cuda")

# Profile everything between start() and stop()
aai.start()
model(xx)
aai.stop()
```
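The `start()`/`stop()` pair brackets the region to profile. The same bracketing pattern appears in Python's own `cProfile`, which makes for a self-contained illustration of the idea (no GPU or AlphAI required; `forward_pass` here is a placeholder workload):

```python
import cProfile
import io
import math
import pstats

profiler = cProfile.Profile()

def forward_pass():
    # Stand-in for a model forward pass
    return sum(math.sin(i) for i in range(1000))

profiler.enable()   # analogous to aai.start()
forward_pass()
profiler.disable()  # analogous to aai.stop()

# Summarize the five most expensive calls
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```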
Benchmark and time your code and functions.
```python
from alphai import AlphAI

aai = AlphAI()

prompt = "Hello there!"

# Time a single call
aai.start_timer()
aai.generate(prompt)
aai.stop_timer()

# Or average over several iterations
aai.benchmark(aai.generate, prompt, num_iter=5)
```
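Benchmarking over `num_iter` iterations amounts to timing repeated calls and averaging. A hedged sketch of that idea in plain Python (the `benchmark` helper and `fake_generate` workload below are illustrative placeholders, not AlphAI's API):

```python
import time

def benchmark(fn, *args, num_iter=5):
    """Run fn(*args) num_iter times and return the mean wall-clock seconds."""
    timings = []
    for _ in range(num_iter):
        start = time.perf_counter()
        fn(*args)
        timings.append(time.perf_counter() - start)
    return sum(timings) / num_iter

# Placeholder workload standing in for a model call
def fake_generate(prompt):
    return prompt.upper()

mean_s = benchmark(fake_generate, "Hello there!", num_iter=5)
print(f"mean: {mean_s * 1e6:.1f} µs per call")
```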