
Detect GPU

Classify GPUs based on their benchmark score in order to provide an adaptive experience.

Demo

Live demo

Installation

Make sure you have Node.js installed.

 $ npm install detect-gpu

Usage

detect-gpu uses benchmarking scores to determine which tier should be assigned to the user's GPU. If no WebGL context can be created or the GPU is blacklisted, TIER_0 is assigned. One should provide an HTML fallback page to which such users are redirected.

By default, all GPUs that meet these preconditions are classified as TIER_1. Using user agent detection, a distinction is made between mobile (phone and tablet) devices, prefixed with GPU_MOBILE_, and desktop devices, prefixed with GPU_DESKTOP_. Both prefixes are followed by TIER_N, where N is the tier number.

In order to keep up to date with new GPUs coming out, detect-gpu splits the benchmarking scores into 4 tiers based on rough estimates of market share.

By default, detect-gpu assumes the lowest 10% of scores are insufficient to run the experience; these GPUs are assigned TIER_0. The next 40% of GPUs are considered good enough to run the experience and are assigned TIER_1. The following 30% are considered powerful and are classified as TIER_2. The top 20% are considered very powerful, are assigned TIER_3, and can run the experience with all bells and whistles.
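As an illustration of the idea only (this is not detect-gpu's internal implementation), splitting scores into tiers by cumulative percentage could look roughly like this:

// Illustrative sketch: map a benchmark score to a tier index using
// cumulative percentages, e.g. [10, 40, 30, 20] for TIER_0..TIER_3.
const tierForScore = (
  sortedScores: number[], // benchmark scores sorted ascending
  percentages: number[],
  score: number
): number => {
  // Percentile rank of the score within the benchmark list (0..1).
  const rank = sortedScores.filter((s) => s <= score).length / sortedScores.length;
  let cumulative = 0;
  for (let tier = 0; tier < percentages.length; tier++) {
    cumulative += percentages[tier] / 100;
    if (rank <= cumulative) return tier;
  }
  return percentages.length - 1;
};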

You can tweak these percentages when registering the application as shown below:

import { getGPUTier } from 'detect-gpu';

const GPUTier = getGPUTier({
  mobileBenchmarkPercentages: [10, 40, 30, 20], // (Default) [TIER_0, TIER_1, TIER_2, TIER_3]
  desktopBenchmarkPercentages: [10, 40, 30, 20], // (Default) [TIER_0, TIER_1, TIER_2, TIER_3]
  forceRendererString: 'Apple A11 GPU', // (Development) Force a certain renderer string
  forceMobile: true, // (Development) Force the use of mobile benchmarking scores
});
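A hedged sketch of how the result could be used to gate features or redirect TIER_0 users to a fallback page. The tier field name and the exact tier strings below are assumptions based on the naming scheme described above; check the actual return value of getGPUTier.

import { getGPUTier } from 'detect-gpu';

// Sketch only: the `tier` field and exact strings are assumed from the
// GPU_DESKTOP_TIER_N / GPU_MOBILE_TIER_N naming scheme described above.
const { tier } = getGPUTier();

if (tier.endsWith('TIER_0')) {
  // GPU is blacklisted or no WebGL context could be created:
  // send the user to a static HTML fallback page.
  window.location.href = '/fallback.html';
} else if (tier.endsWith('TIER_3')) {
  // Very powerful GPU: enable all bells and whistles.
} else {
  // TIER_1 / TIER_2: run the experience with reduced quality settings.
}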

Development

$ yarn start

$ yarn serve

$ yarn lint

$ yarn test

$ yarn build

$ yarn parse-analytics

$ yarn update-benchmarks

Licence

My work is released under the MIT license.

detect-gpu uses both mobile and desktop benchmarking scores from https://www.notebookcheck.net/.

The unmasked renderer strings have been gathered using the analytics script found in scripts/analytics_embed.js.
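For context, a minimal sketch (not the actual scripts/analytics_embed.js) of how an unmasked renderer string can be read in the browser via the WEBGL_debug_renderer_info extension:

// Sketch: read the unmasked GPU renderer string, if the extension is available.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl');
let renderer = 'unknown';
if (gl) {
  const debugInfo = gl.getExtension('WEBGL_debug_renderer_info');
  if (debugInfo) {
    renderer = gl.getParameter(debugInfo.UNMASKED_RENDERER_WEBGL) as string;
  }
}
// e.g. "ANGLE (NVIDIA GeForce GTX 1080 Direct3D11 vs_5_0 ps_5_0)"
console.log(renderer);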
