The Tesla P40 is purpose-built to deliver maximum throughput for deep learning workloads.

140X HIGHER THROUGHPUT TO KEEP UP WITH EXPLODING DATA
The Tesla P40 is powered by the new Pascal architecture and delivers over 47 TOPS of deep learning inference performance. A single server with 8 Tesla P40s can replace up to 140 CPU-only servers for deep learning workloads, delivering substantially higher throughput at a lower acquisition cost.

REAL-TIME INFERENCE
The Tesla P40 delivers up to 30X faster inference performance with INT8 operations, providing real-time responsiveness even for the most complex deep learning models (see the sketch at the end of this section).

SIMPLIFIED OPERATIONS WITH A SINGLE TRAINING AND INFERENCE PLATFORM
Today, deep learning models are typically trained on GPU servers but deployed on CPU-only servers for inference. The Tesla P40 offers a drastically simplified workflow, so organizations can use the same servers to iterate and deploy.
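The INT8 throughput quoted above is generally realized through Pascal's 8-bit integer dot-product-and-accumulate instruction (DP4A), exposed in CUDA as the __dp4a intrinsic on compute capability 6.1 devices such as the Tesla P40. The following is a minimal illustrative sketch, not taken from NVIDIA's materials; the file name and kernel name are invented for the example, and production inference would normally go through a library such as TensorRT rather than hand-written kernels.

```cuda
// dp4a_example.cu -- hypothetical example, assuming a compute capability 6.1+ GPU
// (e.g. Tesla P40). Each 32-bit int packs four signed 8-bit values; __dp4a multiplies
// the four byte pairs and adds the result to a 32-bit accumulator in one instruction.
// Build: nvcc -arch=sm_61 dp4a_example.cu -o dp4a_example
#include <cstdio>
#include <cuda_runtime.h>

__global__ void int8_dot(const int *a, const int *b, int *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = __dp4a(a[i], b[i], 0);  // 4-way INT8 dot product, INT32 accumulate
}

int main()
{
    const int n = 1;
    // 0x01010101 packs four int8 values of 1; 0x02020202 packs four 2s,
    // so the expected dot product is 1*2 repeated four times = 8.
    int ha = 0x01010101, hb = 0x02020202, hout = 0;
    int *da, *db, *dout;
    cudaMalloc(&da, sizeof(int));
    cudaMalloc(&db, sizeof(int));
    cudaMalloc(&dout, sizeof(int));
    cudaMemcpy(da, &ha, sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(db, &hb, sizeof(int), cudaMemcpyHostToDevice);
    int8_dot<<<1, 32>>>(da, db, dout, n);
    cudaMemcpy(&hout, dout, sizeof(int), cudaMemcpyDeviceToHost);
    printf("INT8 dot product: %d\n", hout);  // prints 8
    cudaFree(da);
    cudaFree(db);
    cudaFree(dout);
    return 0;
}
```

Packing four INT8 multiplies into each instruction is what lets a Pascal part of this class reach roughly four times its FP32 rate on inference arithmetic, which is the basis of the INT8 speed-up claimed above.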