Edition for Web Developers — Last Updated 13 November 2024
The canvas element
Has no interactive content descendants except for a elements, img elements with usemap attributes, button elements, input elements whose type attribute is in the Checkbox or Radio Button state, input elements that are buttons, and select elements with a multiple attribute or a display size greater than 1.
width — Horizontal dimension
height — Vertical dimension
Uses HTMLCanvasElement.
The canvas element provides scripts with a resolution-dependent bitmap canvas, which can be used for rendering graphs, game graphics, art, or other visual images on the fly.
Authors should not use the canvas element in a document when a more suitable element is available. For example, it is inappropriate to use a canvas element to render a page heading: if the desired presentation of the heading is graphically intense, it should be marked up using appropriate elements (typically h1) and then styled using CSS and supporting technologies such as shadow trees.
When authors use the canvas element, they must also provide content that, when presented to the user, conveys essentially the same function or purpose as the canvas's bitmap. This content may be placed as content of the canvas element. The contents of the canvas element, if any, are the element's fallback content.
In interactive visual media, if scripting is enabled for the canvas element, and if support for canvas elements has been enabled, the canvas element represents embedded content consisting of a dynamically created image.
In static, non-interactive, visual media, if the canvas element has been previously associated with a rendering context (e.g. if the page was viewed in an interactive visual medium and is now being printed, or if some script that ran during the page layout process painted on the element), then the canvas element represents embedded content with the element's current bitmap and size. Otherwise, the element represents its fallback content instead.
In non-visual media, and in visual media if scripting is disabled for the canvas element or if support for canvas elements has been disabled, the canvas element represents its fallback content instead.
When a canvas element represents embedded content, the user can still focus descendants of the canvas element (in the fallback content). When an element is focused, it is the target of keyboard interaction events (even though the element itself is not visible). This allows authors to make an interactive canvas keyboard-accessible: authors should have a one-to-one mapping of interactive regions to focusable areas in the fallback content. (Focus has no effect on mouse interaction events.) [DOMEVENTS]
An element whose nearest canvas element ancestor is being rendered and represents embedded content is an element that is being used as relevant canvas fallback content.
The canvas element has two attributes to control the size of the element's bitmap: width and height. These attributes, when specified, must have values that are valid non-negative integers. The width attribute defaults to 300, and the height attribute defaults to 150.
When setting the value of the width or height attribute, if the canvas element's context mode is set to placeholder, the user agent must throw an "InvalidStateError" DOMException and leave the attribute's value unchanged.
The natural dimensions of the canvas element, when it represents embedded content, are equal to the dimensions of the element's bitmap.
The user agent must use a square pixel density consisting of one pixel of image data per coordinate space unit for the bitmaps of a canvas and its rendering contexts.
A canvas element can be sized arbitrarily by a style sheet, and its bitmap is then subject to the 'object-fit' CSS property.
context = canvas.getContext(contextId [, options ])
Returns an object that exposes an API for drawing on the canvas. contextId specifies the desired API: "2d", "bitmaprenderer", "webgl", "webgl2", or "webgpu". options is handled by that API.
This specification defines the "2d" and "bitmaprenderer" contexts below. The WebGL specifications define the "webgl" and "webgl2" contexts, and WebGPU defines the "webgpu" context. [WEBGL] [WEBGPU]
Returns null if contextId is not supported, or if the canvas has already been initialized with another context type (e.g. when trying to get a "2d" context after getting a "webgl" context).
url = canvas.toDataURL([ type [, quality ] ])
Returns a data: URL for the image in the canvas.
The first argument, if provided, controls the type of the image to be returned (e.g. PNG or JPEG). The default is "image/png"; that type is also used if the given type isn't supported. The second argument applies if the type is an image format that supports variable quality (such as "image/jpeg"), and is a number in the range 0.0 to 1.0 giving the desired quality level for the resulting image.
"image/png
"以外の型を使用しようとする場合、著者は、実際に返された文字列が正確に"data:image/png,
"または"data:image/png;
"のいずれかの文字列で始まるかどうかをチェックすることで、画像が要求されたフォーマットで返されたかどうかを確認できる。その場合、画像はPNGであり、したがって要求された型はサポートされない。(キャンバスが高さや幅のいずれかを持たない場合、このような例外の1つであり、結果は単に"data:,
"であるかもしれない)。
canvas.toBlob(callback [, type [, quality ] ])
Creates a Blob object representing a file containing the image in the canvas, and invokes a callback with a handle to that object.
The second argument, if provided, controls the type of the image to be returned (e.g. PNG or JPEG). The default is "image/png"; that type is also used if the given type isn't supported. The third argument applies if the type is an image format that supports variable quality (such as "image/jpeg"), and is a number in the range 0.0 to 1.0 giving the desired quality level for the resulting image.
canvas.transferControlToOffscreen()
Returns a newly created OffscreenCanvas object that uses the canvas element as a placeholder. Once the canvas element has become a placeholder for an OffscreenCanvas object, its natural size can no longer be changed, and it cannot have a rendering context. The content of the placeholder canvas comes from the OffscreenCanvas object it is a placeholder for.
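The following is a minimal, illustrative sketch of the members above, assuming the document contains a canvas element; it defensively handles getContext() returning null and checks whether the requested toDataURL() format was honored:

const canvas = document.querySelector('canvas');

// getContext() returns null if the context type is unsupported or the
// canvas was already initialized with a different context type.
const context = canvas.getContext('2d');
if (!context) {
  // Fall back, e.g. leave the fallback content visible.
} else {
  context.fillRect(0, 0, 50, 50);

  // Request a JPEG, then verify whether the format was honored: a result
  // starting with "data:image/png" means JPEG was not supported.
  const url = canvas.toDataURL('image/jpeg', 0.8);
  const isPNGFallback = url.startsWith('data:image/png,') ||
                        url.startsWith('data:image/png;');

  // toBlob() delivers the encoded image asynchronously via a callback.
  canvas.toBlob(blob => {
    console.log('encoded size:', blob.size, 'bytes; PNG fallback:', isPNGFallback);
  }, 'image/jpeg', 0.8);
}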
context = canvas.getContext('2d' [, { [ alpha: true ] [, desynchronized: false ] [, colorSpace: 'srgb'] [, willReadFrequently: false ]} ])
Returns a CanvasRenderingContext2D
object that is permanently bound to a particular canvas
element.
If the alpha
member is false, then the context is forced to always be opaque.
If the desynchronized
member is true, then the context might be desynchronized.
The colorSpace
member specifies the color space of the rendering context.
If the willReadFrequently
member is true, then the context is marked for readback optimization.
context.canvas
Returns the canvas
element.
attributes = context.getContextAttributes()
Returns an object whose:
alpha member is true if the context has an alpha channel, or false if it was forced to be opaque.
desynchronized member is true if the context can be desynchronized.
colorSpace member is a string indicating the context's color space.
willReadFrequently member is true if the context is marked for readback optimization.
The PredefinedColorSpace enumeration is used to specify the color space of the canvas's backing store.
The "srgb
" value indicates the 'srgb' color space.
The "display-p3
" value indicates the 'display-p3' color space.
The algorithm for converting between color spaces can be found in the Converting Colors section of CSS Color. [CSSCOLOR]
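A minimal sketch of requesting a 2D context with specific context attributes and reading back what was actually granted (the reported values depend on the user agent):

const canvas = document.createElement('canvas');
const context = canvas.getContext('2d', {
  alpha: false,               // opaque backing store
  willReadFrequently: true,   // hint: optimize for getImageData() readback
  colorSpace: 'display-p3'    // wide-gamut backing store where supported
});

// getContextAttributes() reports the settings in effect for this context.
const attrs = context.getContextAttributes();
console.log(attrs.alpha, attrs.desynchronized, attrs.colorSpace, attrs.willReadFrequently);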
The CanvasFillRule
enumeration is used to select the fill rule algorithm by which to determine if a point is inside or outside a path.
The value "nonzero
" value indicates the nonzero winding rule, wherein a point is considered to be outside a shape if the number of times a half-infinite straight line drawn from that point crosses the shape's path going in one direction is equal to the number of times it crosses the path going in the other direction.
The "evenodd
" value indicates the even-odd rule, wherein a point is considered to be outside a shape if the number of times a half-infinite straight line drawn from that point crosses the shape's path is even.
If a point is not outside a shape, it is inside the shape.
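For example, filling two concentric circles drawn as one path with the "evenodd" rule leaves the inner circle unfilled, producing a ring. A brief sketch, assuming a canvas element is present:

const context = document.querySelector('canvas').getContext('2d');

context.beginPath();
context.arc(75, 75, 50, 0, 2 * Math.PI);  // outer circle
context.moveTo(100, 75);                  // start the inner subpath at its first point
context.arc(75, 75, 25, 0, 2 * Math.PI);  // inner circle
context.fill('evenodd');                  // points inside both circles are "outside": a hole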
The ImageSmoothingQuality
enumeration is used to express a preference for the interpolation quality to use when smoothing images.
The "low
" value indicates a preference for a low level of image interpolation quality. Low-quality image interpolation may be more computationally efficient than higher settings.
The "medium
" value indicates a preference for a medium level of image interpolation quality.
The "high
" value indicates a preference for a high level of image interpolation quality. High-quality image interpolation may be more computationally expensive than lower settings.
Bilinear scaling is an example of a relatively fast, lower-quality image-smoothing algorithm. Bicubic or Lanczos scaling are examples of image-smoothing algorithms that produce higher-quality output. This specification does not mandate that specific interpolation algorithms be used.
When the output bitmap is not directly displayed by the user agent, implementations can, instead of updating this bitmap, merely remember the sequence of drawing operations that have been applied to it until such time as the bitmap's actual data is needed (for example because of a call to drawImage(), or the createImageBitmap() factory method). In many cases, this will be more memory efficient.
The bitmap of a canvas
element is the one bitmap that's pretty much always going to be needed in practice. The output bitmap of a rendering context, when it has one, is always just an alias to a canvas
element's bitmap.
Additional bitmaps are sometimes needed, e.g. to enable fast drawing when the canvas is being painted at a different size than its natural size, or to enable double buffering so that graphics updates, like page scrolling for example, can be processed concurrently while canvas draw commands are being executed.
Objects that implement the CanvasState
interface maintain a stack of drawing states. Drawing states consist of:
The current transformation matrix.
The current clipping region.
The current letter spacing, word spacing, fill style, stroke style, filter, global alpha, compositing and blending operator, and shadow color.
The current values of the following attributes: lineWidth
, lineCap
, lineJoin
, miterLimit
, lineDashOffset
, shadowOffsetX
, shadowOffsetY
, shadowBlur
, font
, textAlign
, textBaseline
, direction
, fontKerning
, fontStretch
, fontVariantCaps
, textRendering
, imageSmoothingEnabled
, imageSmoothingQuality
.
The current dash list.
The rendering context's bitmaps are not part of the drawing state, as they depend on whether and how the rendering context is bound to a canvas
element.
Objects that implement the CanvasState
mixin have a context lost boolean, that is initialized to false when the object is created. The context lost value is updated in the context lost steps.
context.save()
Pushes the current state onto the stack.
context.restore()
Pops the top state on the stack, restoring the context to that state.
context.reset()
Resets the rendering context, which includes the backing buffer, the drawing state stack, path, and styles.
context.isContextLost()
Returns true if the rendering context was lost. Context loss can occur due to driver crashes, running out of memory, etc. In these cases, the canvas loses its backing storage and takes steps to reset the rendering context to its default state.
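A sketch of using the drawing state stack to apply temporary settings, assuming a canvas element is present:

const context = document.querySelector('canvas').getContext('2d');

context.fillStyle = 'blue';
context.save();                    // push the current state
context.fillStyle = 'red';
context.translate(10, 10);
context.fillRect(0, 0, 20, 20);    // red square, offset by the translation
context.restore();                 // pop: fillStyle and the transform are back
context.fillRect(0, 0, 20, 20);    // blue square at the origin

if (context.isContextLost()) {
  // e.g. schedule a full redraw once the context has been reset
}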
context.lineWidth [ = value ]
styles.lineWidth [ = value ]
Returns the current line width.
Can be set, to change the line width. Values that are not finite values greater than zero are ignored.
context.lineCap [ = value ]
styles.lineCap [ = value ]
Returns the current line cap style.
Can be set, to change the line cap style.
The possible line cap styles are "butt
", "round
", and "square
". Other values are ignored.
context.lineJoin [ = value ]
styles.lineJoin [ = value ]
Returns the current line join style.
Can be set, to change the line join style.
The possible line join styles are "bevel
", "round
", and "miter
". Other values are ignored.
context.miterLimit [ = value ]
styles.miterLimit [ = value ]
Returns the current miter limit ratio.
Can be set, to change the miter limit ratio. Values that are not finite values greater than zero are ignored.
context.setLineDash(segments)
styles.setLineDash(segments)
Sets the current line dash pattern (as used when stroking). The argument is a list of distances for which to alternately have the line on and the line off.
segments = context.getLineDash()
segments = styles.getLineDash()
Returns a copy of the current line dash pattern. The array returned will always have an even number of entries (i.e. the pattern is normalized).
context.lineDashOffset
styles.lineDashOffset
Returns the phase offset (in the same units as the line dash pattern).
Can be set, to change the phase offset. Values that are not finite values are ignored.
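A sketch combining these line styles to stroke a dashed, rounded corner, assuming a canvas element is present:

const context = document.querySelector('canvas').getContext('2d');

context.lineWidth = 4;
context.lineCap = 'round';
context.lineJoin = 'round';
context.setLineDash([10, 5]);       // 10px on, 5px off
context.lineDashOffset = 2.5;       // shift the pattern's phase

console.log(context.getLineDash()); // [10, 5] (a normalized copy)

context.beginPath();
context.moveTo(20, 80);
context.lineTo(20, 20);
context.lineTo(80, 20);
context.stroke();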
context.font [ = value ]
styles.font [ = value ]
Returns the current font settings.
Can be set, to change the font. The syntax is the same as for the CSS 'font' property; values that cannot be parsed as CSS font values are ignored.
Relative keywords and lengths are computed relative to the font of the canvas
element.
context.textAlign [ = value ]
styles.textAlign [ = value ]
Returns the current text alignment settings.
Can be set, to change the alignment. The possible values and their meanings are given below. Other values are ignored. The default is "start
".
context.textBaseline [ = value ]
styles.textBaseline [ = value ]
Returns the current baseline alignment settings.
Can be set, to change the baseline alignment. The possible values and their meanings are given below. Other values are ignored. The default is "alphabetic
".
context.direction [ = value ]
styles.direction [ = value ]
Returns the current directionality.
Can be set, to change the directionality. The possible values and their meanings are given below. Other values are ignored. The default is "inherit
".
context.letterSpacing [ = value ]
styles.letterSpacing [ = value ]
Returns the current spacing between characters in the text.
Can be set, to change spacing between characters. Values that cannot be parsed as a CSS <length> are ignored. The default is "0px
".
context.fontKerning [ = value ]
styles.fontKerning [ = value ]
Returns the current font kerning settings.
Can be set, to change the font kerning. The possible values and their meanings are given below. Other values are ignored. The default is "auto
".
context.fontStretch [ = value ]
styles.fontStretch [ = value ]
Returns the current font stretch settings.
Can be set, to change the font stretch. The possible values and their meanings are given below. Other values are ignored. The default is "normal
".
context.fontVariantCaps [ = value ]
styles.fontVariantCaps [ = value ]
Returns the current font variant caps settings.
Can be set, to change the font variant caps. The possible values and their meanings are given below. Other values are ignored. The default is "normal
".
context.textRendering [ = value ]
styles.textRendering [ = value ]
Returns the current text rendering settings.
Can be set, to change the text rendering. The possible values and their meanings are given below. Other values are ignored. The default is "auto
".
context.wordSpacing [ = value ]
styles.wordSpacing [ = value ]
Returns the current spacing between words in the text.
Can be set, to change spacing between words. Values that cannot be parsed as a CSS <length> are ignored. The default is "0px
".
The textAlign
attribute's allowed keywords are as follows:
start
Align to the start edge of the text (left side in left-to-right text, right side in right-to-left text).
end
Align to the end edge of the text (right side in left-to-right text, left side in right-to-left text).
left
Align to the left.
right
Align to the right.
center
Align to the center.
The textBaseline
attribute's allowed keywords correspond to alignment points in the font:
The keywords map to these alignment points as follows:
top
The top of the em square.
hanging
The hanging baseline.
middle
The middle of the em square.
alphabetic
The alphabetic baseline.
ideographic
The ideographic-under baseline.
bottom
The bottom of the em square.
The direction
attribute's allowed keywords are as follows:
ltr
Treat input to the text preparation algorithm as left-to-right text.
rtl
Treat input to the text preparation algorithm as right-to-left text.
inherit
Default to the directionality of the canvas
element or Document
as appropriate.
The fontKerning
attribute's allowed keywords are as follows:
auto
Kerning is applied at the discretion of the user agent.
normal
Kerning is applied.
none
Kerning is not applied.
The fontStretch
attribute's allowed keywords are as follows:
ultra-condensed
Same as CSS 'font-stretch' 'ultra-condensed' setting.
extra-condensed
Same as CSS 'font-stretch' 'extra-condensed' setting.
condensed
Same as CSS 'font-stretch' 'condensed' setting.
semi-condensed
Same as CSS 'font-stretch' 'semi-condensed' setting.
normal
The default setting, where the width of the glyphs is at 100%.
semi-expanded
Same as CSS 'font-stretch' 'semi-expanded' setting.
expanded
Same as CSS 'font-stretch' 'expanded' setting.
extra-expanded
Same as CSS 'font-stretch' 'extra-expanded' setting.
ultra-expanded
Same as CSS 'font-stretch' 'ultra-expanded' setting.
The fontVariantCaps
attribute's allowed keywords are as follows:
normal
None of the features listed below are enabled.
small-caps
Same as CSS 'font-variant-caps' 'small-caps' setting.
all-small-caps
Same as CSS 'font-variant-caps' 'all-small-caps' setting.
petite-caps
Same as CSS 'font-variant-caps' 'petite-caps' setting.
all-petite-caps
Same as CSS 'font-variant-caps' 'all-petite-caps' setting.
unicase
Same as CSS 'font-variant-caps' 'unicase' setting.
titling-caps
Same as CSS 'font-variant-caps' 'titling-caps' setting.
The textRendering
attribute's allowed keywords are as follows:
auto
Same as 'auto' in SVG text-rendering property.
optimizeSpeed
Same as 'optimizeSpeed' in SVG text-rendering property.
optimizeLegibility
Same as 'optimizeLegibility' in SVG text-rendering property.
geometricPrecision
Same as 'geometricPrecision' in SVG text-rendering property.
The text preparation algorithm is as follows. It takes as input a string text, a CanvasTextDrawingStyles
object target, and an optional length maxWidth. It returns an array of glyph shapes, each positioned on a common coordinate space, a physical alignment whose value is one of left, right, and center, and an inline box. (Most callers of this algorithm ignore the physical alignment and the inline box.)
If maxWidth was provided but is less than or equal to zero or equal to NaN, then return an empty array.
Replace all ASCII whitespace in text with U+0020 SPACE characters.
Let font be the current font of target, as given by that object's font
attribute.
Apply the appropriate step from the following list to determine the value of direction:
If the direction attribute has the value "ltr"
Let direction be 'ltr'.
If the direction attribute has the value "rtl"
Let direction be 'rtl'.
If the target object's font style source object is a Document with a non-null document element
Let direction be the directionality of the document element.
Otherwise
Let direction be 'ltr'.
Form a hypothetical infinitely-wide CSS line box containing a single inline box containing the text text, with its CSS properties set as follows:
Property | Source |
---|---|
'direction' | direction |
'font' | font |
'font-kerning' | target's fontKerning |
'font-stretch' | target's fontStretch |
'font-variant-caps' | target's fontVariantCaps |
'letter-spacing' | target's letter spacing |
SVG text-rendering | target's textRendering |
'white-space' | 'pre' |
'word-spacing' | target's word spacing |
and with all other properties set to their initial values.
If maxWidth was provided and the hypothetical width of the inline box in the hypothetical line box is greater than maxWidth CSS pixels, then change font to have a more condensed font (if one is available or if a reasonably readable one can be synthesized by applying a horizontal scale factor to the font) or a smaller font, and return to the previous step.
The anchor point is a point on the inline box, and the physical alignment is one of the values left, right, and center. These variables are determined by the textAlign and textBaseline values as follows:
Horizontal position:
If textAlign is left
If textAlign is start and direction is 'ltr'
If textAlign is end and direction is 'rtl'
Let the anchor point's horizontal position be the left edge of the inline box, and let physical alignment be left.
If textAlign is right
If textAlign is end and direction is 'ltr'
If textAlign is start and direction is 'rtl'
Let the anchor point's horizontal position be the right edge of the inline box, and let physical alignment be right.
If textAlign is center
Let the anchor point's horizontal position be half way between the left and right edges of the inline box, and let physical alignment be center.
Vertical position:
If textBaseline is top
Let the anchor point's vertical position be the top of the em box of the first available font of the inline box.
If textBaseline is hanging
Let the anchor point's vertical position be the hanging baseline of the first available font of the inline box.
If textBaseline is middle
Let the anchor point's vertical position be half way between the bottom and the top of the em box of the first available font of the inline box.
If textBaseline is alphabetic
Let the anchor point's vertical position be the alphabetic baseline of the first available font of the inline box.
If textBaseline is ideographic
Let the anchor point's vertical position be the ideographic-under baseline of the first available font of the inline box.
If textBaseline is bottom
Let the anchor point's vertical position be the bottom of the em box of the first available font of the inline box.
Let result be an array constructed by iterating over each glyph in the inline box from left to right (if any), adding to the array, for each glyph, the shape of the glyph as it is in the inline box, positioned on a coordinate space using CSS pixels with its origin at the anchor point.
Return result, physical alignment, and the inline box.
Objects that implement the CanvasPath
interface have a path. A path has a list of zero or more subpaths. Each subpath consists of a list of one or more points, connected by straight or curved line segments, and a flag indicating whether the subpath is closed or not. A closed subpath is one where the last point of the subpath is connected to the first point of the subpath by a straight line. Subpaths with only one point are ignored when painting the path.
Paths have a need new subpath flag. When this flag is set, certain APIs create a new subpath rather than extending the previous one. When a path is created, its need new subpath flag must be set.
When an object implementing the CanvasPath
interface is created, its path must be initialized to zero subpaths.
context.moveTo(x, y)
path.moveTo(x, y)
Creates a new subpath with the given point.
context.closePath()
path.closePath()
Marks the current subpath as closed, and starts a new subpath with a point the same as the start and end of the newly closed subpath.
context.lineTo(x, y)
path.lineTo(x, y)
Adds the given point to the current subpath, connected to the previous one by a straight line.
context.quadraticCurveTo(cpx, cpy, x, y)
path.quadraticCurveTo(cpx, cpy, x, y)
Adds the given point to the current subpath, connected to the previous one by a quadratic Bézier curve with the given control point.
context.bezierCurveTo(cp1x, cp1y, cp2x, cp2y, x, y)
path.bezierCurveTo(cp1x, cp1y, cp2x, cp2y, x, y)
Adds the given point to the current subpath, connected to the previous one by a cubic Bézier curve with the given control points.
context.arcTo(x1, y1, x2, y2, radius)
path.arcTo(x1, y1, x2, y2, radius)
Adds an arc with the given control points and radius to the current subpath, connected to the previous point by a straight line.
Throws an "IndexSizeError
" DOMException
if the given radius is negative.
context.arc(x, y, radius, startAngle, endAngle [, counterclockwise ])
path.arc(x, y, radius, startAngle, endAngle [, counterclockwise ])
Adds points to the subpath such that the arc described by the circumference of the circle described by the arguments, starting at the given start angle and ending at the given end angle, going in the given direction (defaulting to clockwise), is added to the path, connected to the previous point by a straight line.
Throws an "IndexSizeError
" DOMException
if the given radius is negative.
context.ellipse(x, y, radiusX, radiusY, rotation, startAngle, endAngle [, counterclockwise])
path.ellipse(x, y, radiusX, radiusY, rotation, startAngle, endAngle [, counterclockwise])
Adds points to the subpath such that the arc described by the circumference of the ellipse described by the arguments, starting at the given start angle and ending at the given end angle, going in the given direction (defaulting to clockwise), is added to the path, connected to the previous point by a straight line.
Throws an "IndexSizeError
" DOMException
if the given radius is negative.
context.rect(x, y, w, h)
path.rect(x, y, w, h)
Adds a new closed subpath to the path, representing the given rectangle.
context.roundRect(x, y, w, h, radii)
path.roundRect(x, y, w, h, radii)
Adds a new closed subpath to the path representing the given rounded rectangle. radii is either a list of radii or a single radius representing the corners of the rectangle in pixels. If a list is provided, the number and order of these radii function in the same way as the CSS 'border-radius' property. A single radius behaves the same way as a list with a single element.
If w and h are both greater than or equal to 0, or if both are smaller than 0, then the path is drawn clockwise. Otherwise, it is drawn counterclockwise.
When w is negative, the rounded rectangle is flipped horizontally, which means that the radius values that normally apply to the left corners are used on the right and vice versa. Similarly, when h is negative, the rounded rect is flipped vertically.
When a value r in radii is a number, the corresponding corner(s) are drawn as circular arcs of radius r.
When a value r in radii is an object with { x, y }
properties, the corresponding corner(s) are drawn as elliptical arcs whose x and y radii are equal to r.x and r.y, respectively.
When the sum of the radii of two corners of the same edge is greater than the length of the edge, all the radii of the rounded rectangle are scaled by a factor of length / (r1 + r2). If multiple edges have this property, the scale factor of the edge with the smallest scale factor is used. This is consistent with CSS behavior.
Throws a RangeError
if radii is a list whose size is not one, two, three, or four.
Throws a RangeError
if a value in radii is a negative number, or is an { x, y }
object whose x
or y
properties are negative numbers.
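A sketch exercising several of these path methods on the current default path, assuming a canvas element is present:

const context = document.querySelector('canvas').getContext('2d');

context.beginPath();
context.moveTo(20, 20);
context.lineTo(120, 20);
context.quadraticCurveTo(150, 50, 120, 80);    // curve with one control point
context.arcTo(60, 80, 20, 40, 15);             // rounded corner of radius 15
context.closePath();

context.roundRect(160, 20, 80, 60, [10, 20]);  // new closed subpath; radii as for 'border-radius'
context.rect(160, 100, 80, 40);                // plain rectangular subpath

context.fill('nonzero');
context.stroke();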
Path2D objects
Path2D
objects can be used to declare paths that are then later used on objects implementing the CanvasDrawPath
interface. In addition to many of the APIs described in earlier sections, Path2D
objects have methods to combine paths, and to add text to paths.
path = new Path2D()
Creates a new empty Path2D
object.
path = new Path2D(path)
When path is a Path2D
object, returns a copy.
When path is a string, creates the path described by the argument, interpreted as SVG path data. [SVG]
path.addPath(path [, transform ])
Adds to the path the path given by the argument.
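A sketch of constructing and combining Path2D objects (the SVG path data here is arbitrary), assuming a canvas element is present:

const context = document.querySelector('canvas').getContext('2d');

// A triangle described as SVG path data.
const triangle = new Path2D('M 10 10 L 90 10 L 50 80 Z');

// Copy it, then add a translated copy of the same outline to the copy.
const combined = new Path2D(triangle);
combined.addPath(triangle, new DOMMatrix().translate(100, 0));

context.fill(combined, 'nonzero');

// Path2D objects also work with hit testing.
console.log(context.isPointInPath(combined, 50, 30));  // true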
Objects that implement the CanvasTransform
interface have a current transformation matrix, as well as methods (described in this section) to manipulate it. When an object implementing the CanvasTransform
interface is created, its transformation matrix must be initialized to the identity matrix.
The current transformation matrix is applied to coordinates when creating the current default path, and when painting text, shapes, and Path2D
objects, on objects implementing the CanvasTransform
interface.
context.scale(x, y)
Changes the current transformation matrix to apply a scaling transformation with the given characteristics.
context.rotate(angle)
Changes the current transformation matrix to apply a rotation transformation with the given characteristics. The angle is in radians.
context.translate(x, y)
Changes the current transformation matrix to apply a translation transformation with the given characteristics.
context.transform(a, b, c, d, e, f)
Changes the current transformation matrix to apply the matrix given by the arguments as described below.
matrix = context.getTransform()
Returns a copy of the current transformation matrix, as a newly created DOMMatrix
object.
context.setTransform(a, b, c, d, e, f)
Changes the current transformation matrix to the matrix given by the arguments as described below.
context.setTransform(transform)
Changes the current transformation matrix to the matrix represented by the passed DOMMatrix2DInit
dictionary.
context.resetTransform()
Changes the current transformation matrix to the identity matrix.
The arguments a, b, c, d, e, and f are sometimes called m11, m12, m21, m22, dx, and dy or m11, m21, m12, m22, dx, and dy. Care ought to be taken in particular with the order of the second and third arguments (b and c) as their order varies from API to API and APIs sometimes use the notation m12/m21 and sometimes m21/m12 for those positions.
Given a matrix of the form created by the transform()
and setTransform()
methods, i.e.,
a | c | e |
b | d | f |
0 | 0 | 1 |
the resulting transformed coordinates after transform matrix multiplication will be
xnew = a x + c y + e
ynew = b x + d y + f
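For example, with a = d = 2 (a scale), e = 5 and f = 10 (a translation), a point drawn at (3, 4) ends up at (2·3 + 0·4 + 5, 0·3 + 2·4 + 10) = (11, 18) on the bitmap. A sketch, assuming a canvas element is present:

const context = document.querySelector('canvas').getContext('2d');

context.setTransform(2, 0, 0, 2, 5, 10);  // a, b, c, d, e, f
context.fillRect(3, 4, 1, 1);             // paints at bitmap coordinates (11, 18), 2 pixels square

console.log(context.getTransform().toString()); // "matrix(2, 0, 0, 2, 5, 10)"

context.resetTransform();                 // back to the identity matrix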
Some methods on the CanvasDrawImage
and CanvasFillStrokeStyles
interfaces take the union type CanvasImageSource
as an argument.
This union type allows objects implementing any of the following interfaces to be used as image sources:
HTMLOrSVGImageElement
(img
or SVG image
elements)HTMLVideoElement
(video
elements)HTMLCanvasElement
(canvas
elements)OffscreenCanvas
ImageBitmap
VideoFrame
Although not formally specified as such, SVG image
elements are expected to be implemented nearly identical to img
elements. That is, SVG image
elements share the fundamental concepts and features of img
elements.
The ImageBitmap
interface can be created from a number of other image-representing types, including ImageData
.
To check the usability of the image argument, where image is a CanvasImageSource
object, run these steps:
Switch on image:
HTMLOrSVGImageElement
If image's current request's state is broken, then throw an "InvalidStateError
" DOMException
.
If image is not fully decodable, then return bad.
If image has a natural width or natural height (or both) equal to zero, then return bad.
HTMLVideoElement
If image's readyState
attribute is either HAVE_NOTHING
or HAVE_METADATA
, then return bad.
HTMLCanvasElement
OffscreenCanvas
If image has either a horizontal dimension or a vertical dimension equal to zero, then throw an "InvalidStateError
" DOMException
.
ImageBitmap
VideoFrame
If image's [[Detached]] internal slot value is set to true, then throw an "InvalidStateError
" DOMException
.
Return good.
When a CanvasImageSource
object represents an HTMLOrSVGImageElement
, the element's image must be used as the source image.
Specifically, when a CanvasImageSource
object represents an animated image in an HTMLOrSVGImageElement
, the user agent must use the default image of the animation (the one that the format defines is to be used when animation is not supported or is disabled), or, if there is no such image, the first frame of the animation, when rendering the image for CanvasRenderingContext2D
APIs.
When a CanvasImageSource
object represents an HTMLVideoElement
, then the frame at the current playback position when the method with the argument is invoked must be used as the source image when rendering the image for CanvasRenderingContext2D
APIs, and the source image's dimensions must be the natural width and natural height of the media resource (i.e., after any aspect-ratio correction has been applied).
When a CanvasImageSource
object represents an HTMLCanvasElement
, the element's bitmap must be used as the source image.
When a CanvasImageSource
object represents an element that is being rendered and that element has been resized, the original image data of the source image must be used, not the image as it is rendered (e.g. width
and height
attributes on the source element have no effect on how the object is interpreted when rendering the image for CanvasRenderingContext2D
APIs).
When a CanvasImageSource
object represents an ImageBitmap
, the object's bitmap image data must be used as the source image.
When a CanvasImageSource
object represents an OffscreenCanvas
, the object's bitmap must be used as the source image.
When a CanvasImageSource
object represents a VideoFrame
, the object's pixel data must be used as the source image, and the source image's dimensions must be the object's [[display width]] and [[display height]].
An object image is not origin-clean if, switching on image's type:
HTMLOrSVGImageElement
image's current request's image data is CORS-cross-origin.
HTMLVideoElement
image's media data is CORS-cross-origin.
HTMLCanvasElement
ImageBitmap
OffscreenCanvas
image's bitmap's origin-clean flag is false.
context.fillStyle [ = value ]
Returns the current style used for filling shapes.
Can be set, to change the fill style.
The style can be either a string containing a CSS color, or a CanvasGradient
or CanvasPattern
object. Invalid values are ignored.
context.strokeStyle [ = value ]
Returns the current style used for stroking shapes.
Can be set, to change the stroke style.
The style can be either a string containing a CSS color, or a CanvasGradient
or CanvasPattern
object. Invalid values are ignored.
There are three types of gradients, linear gradients, radial gradients, and conic gradients, represented by objects implementing the opaque CanvasGradient
interface.
Once a gradient has been created (see below), stops are placed along it to define how the colors are distributed along the gradient.
gradient.addColorStop(offset, color)
Adds a color stop with the given color to the gradient at the given offset. 0.0 is the offset at one end of the gradient, 1.0 is the offset at the other end.
Throws an "IndexSizeError
" DOMException
if the offset is out of range. Throws a "SyntaxError
" DOMException
if the color cannot be parsed.
gradient = context.createLinearGradient(x0, y0, x1, y1)
Returns a CanvasGradient
object that represents a linear gradient that paints along the line given by the coordinates represented by the arguments.
gradient = context.createRadialGradient(x0, y0, r0, x1, y1, r1)
Returns a CanvasGradient
object that represents a radial gradient that paints along the cone given by the circles represented by the arguments.
If either of the radii are negative, throws an "IndexSizeError
" DOMException
exception.
gradient = context.createConicGradient(startAngle, x, y)
Returns a CanvasGradient
object that represents a conic gradient that paints clockwise along the rotation around the center represented by the arguments.
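A sketch creating each kind of gradient and adding color stops, assuming a canvas element is present:

const context = document.querySelector('canvas').getContext('2d');

const linear = context.createLinearGradient(0, 0, 200, 0);
linear.addColorStop(0, 'navy');
linear.addColorStop(1, 'white');

const radial = context.createRadialGradient(100, 100, 10, 100, 100, 80);
radial.addColorStop(0, 'yellow');
radial.addColorStop(1, 'transparent');

const conic = context.createConicGradient(Math.PI / 2, 100, 100);
conic.addColorStop(0, 'red');
conic.addColorStop(0.5, 'blue');
conic.addColorStop(1, 'red');

context.fillStyle = linear;
context.fillRect(0, 0, 200, 200);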
Patterns are represented by objects implementing the opaque CanvasPattern
interface.
pattern = context.createPattern(image, repetition)
Returns a CanvasPattern
object that uses the given image and repeats in the direction(s) given by the repetition argument.
The allowed values for repetition are repeat
(both directions), repeat-x
(horizontal only), repeat-y
(vertical only), and no-repeat
(neither). If the repetition argument is empty, the value repeat
is used.
If the image isn't yet fully decoded, then nothing is drawn. If the image is a canvas with no data, throws an "InvalidStateError
" DOMException
.
pattern.setTransform(transform)
Sets the transformation matrix that will be used when rendering the pattern during a fill or stroke painting operation.
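A sketch that builds a small tile on a scratch canvas and uses it as a repeating pattern (a freshly drawn canvas is always a usable image source), assuming a canvas element is present in the document:

const context = document.querySelector('canvas').getContext('2d');

// Build a 20x20 checkerboard tile on a scratch canvas.
const tile = document.createElement('canvas');
tile.width = tile.height = 20;
const tileContext = tile.getContext('2d');
tileContext.fillStyle = 'lightgray';
tileContext.fillRect(0, 0, 20, 20);
tileContext.fillStyle = 'gray';
tileContext.fillRect(0, 0, 10, 10);
tileContext.fillRect(10, 10, 10, 10);

const pattern = context.createPattern(tile, 'repeat');
pattern.setTransform(new DOMMatrix().rotate(45));  // rotate the tiling

context.fillStyle = pattern;
context.fillRect(0, 0, 300, 150);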
Objects that implement the CanvasRect
interface provide the following methods for immediately drawing rectangles to the bitmap. The methods each take four arguments; the first two give the x and y coordinates of the top left of the rectangle, and the second two give the width w and height h of the rectangle, respectively.
context.clearRect(x, y, w, h)
Clears all pixels on the bitmap in the given rectangle to transparent black.
context.fillRect(x, y, w, h)
Paints the given rectangle onto the bitmap, using the current fill style.
context.strokeRect(x, y, w, h)
Paints the box that outlines the given rectangle onto the bitmap, using the current stroke style.
context.fillText(text, x, y [, maxWidth ])
context.strokeText(text, x, y [, maxWidth ])
Fills or strokes (respectively) the given text at the given position. If a maximum width is provided, the text will be scaled to fit that width if necessary.
metrics = context.measureText(text)
Returns a TextMetrics
object with the metrics of the given text in the current font.
metrics.width
metrics.actualBoundingBoxLeft
metrics.actualBoundingBoxRight
metrics.fontBoundingBoxAscent
metrics.fontBoundingBoxDescent
metrics.actualBoundingBoxAscent
metrics.actualBoundingBoxDescent
metrics.emHeightAscent
metrics.emHeightDescent
metrics.hangingBaseline
metrics.alphabeticBaseline
metrics.ideographicBaseline
Returns the measurement described below.
width
attribute
The width of that inline box, in CSS pixels. (The text's advance width.)
actualBoundingBoxLeft
attribute
The distance parallel to the baseline from the alignment point given by the textAlign
attribute to the left side of the bounding rectangle of the given text, in CSS pixels; positive numbers indicating a distance going left from the given alignment point.
The sum of this value and the next (actualBoundingBoxRight
) can be wider than the width of the inline box (width
), in particular with slanted fonts where characters overhang their advance width.
actualBoundingBoxRight
attribute
The distance parallel to the baseline from the alignment point given by the textAlign
attribute to the right side of the bounding rectangle of the given text, in CSS pixels; positive numbers indicating a distance going right from the given alignment point.
fontBoundingBoxAscent
attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the ascent metric of the first available font, in CSS pixels; positive numbers indicating a distance going up from the given baseline.
This value and the next are useful when rendering a background that has to have a consistent height even if the exact text being rendered changes. The actualBoundingBoxAscent
attribute (and its corresponding attribute for the descent) are useful when drawing a bounding box around specific text.
fontBoundingBoxDescent
attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the descent metric of the first available font, in CSS pixels; positive numbers indicating a distance going down from the given baseline.
actualBoundingBoxAscent
attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the top of the bounding rectangle of the given text, in CSS pixels; positive numbers indicating a distance going up from the given baseline.
This number can vary greatly based on the input text, even if the first font specified covers all the characters in the input. For example, the actualBoundingBoxAscent
of a lowercase "o" from an alphabetic baseline would be less than that of an uppercase "F". The value can easily be negative; for example, the distance from the top of the em box (textBaseline
value "top
") to the top of the bounding rectangle when the given text is just a single comma ",
" would likely (unless the font is quite unusual) be negative.
actualBoundingBoxDescent
attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the bottom of the bounding rectangle of the given text, in CSS pixels; positive numbers indicating a distance going down from the given baseline.
emHeightAscent
attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the highest top of the em squares in the inline box, in CSS pixels; positive numbers indicating that the given baseline is below the top of that em square (so this value will usually be positive). Zero if the given baseline is the top of that em square; half the font size if the given baseline is the middle of that em square.
emHeightDescent
attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the lowest bottom of the em squares in the inline box, in CSS pixels; positive numbers indicating that the given baseline is above the bottom of that em square. (Zero if the given baseline is the bottom of that em square.)
hangingBaseline
attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the hanging baseline of the inline box, in CSS pixels; positive numbers indicating that the given baseline is below the hanging baseline. (Zero if the given baseline is the hanging baseline.)
alphabeticBaseline
attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the alphabetic baseline of the inline box, in CSS pixels; positive numbers indicating that the given baseline is below the alphabetic baseline. (Zero if the given baseline is the alphabetic baseline.)
ideographicBaseline
attribute
The distance from the horizontal line indicated by the textBaseline
attribute to the ideographic-under baseline of the inline box, in CSS pixels; positive numbers indicating that the given baseline is below the ideographic-under baseline. (Zero if the given baseline is the ideographic-under baseline.)
Glyphs rendered using fillText()
and strokeText()
can spill out of the box given by the font size (the em square size) and the width returned by measureText()
(the text width). Authors are encouraged to use the bounding box values described above if this is an issue.
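A sketch using the actual bounding box metrics to draw a tight box around a piece of text, assuming a canvas element is present:

const context = document.querySelector('canvas').getContext('2d');
context.font = '24px serif';
context.textAlign = 'left';
context.textBaseline = 'alphabetic';

const text = 'Typography';
const x = 20, y = 60;
const m = context.measureText(text);

context.fillText(text, x, y);

// Bounding box of the glyphs actually drawn (can exceed m.width).
context.strokeRect(x - m.actualBoundingBoxLeft,
                   y - m.actualBoundingBoxAscent,
                   m.actualBoundingBoxLeft + m.actualBoundingBoxRight,
                   m.actualBoundingBoxAscent + m.actualBoundingBoxDescent);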
A future version of the 2D context API might provide a way to render fragments of documents, rendered using CSS, straight to the canvas. This would be provided in preference to a dedicated way of doing multiline layout.
Objects that implement the CanvasDrawPath
interface have a current default path. There is only one current default path, it is not part of the drawing state. The current default path is a path, as described above.
context.beginPath()
Resets the current default path.
context.fill([ fillRule ])
context.fill(path [, fillRule ])
Fills the subpaths of the current default path or the given path with the current fill style, obeying the given fill rule.
context.stroke()
context.stroke(path)
Strokes the subpaths of the current default path or the given path with the current stroke style.
context.clip([ fillRule ])
context.clip(path [, fillRule ])
Further constrains the clipping region to the current default path or the given path, using the given fill rule to determine what points are in the path.
context.isPointInPath(x, y [, fillRule ])
context.isPointInPath(path, x, y [, fillRule ])
Returns true if the given point is in the current default path or the given path, using the given fill rule to determine what points are in the path.
context.isPointInStroke(x, y)
context.isPointInStroke(path, x, y)
Returns true if the given point would be in the region covered by the stroke of the current default path or the given path, given the current stroke style.
This canvas
element has a couple of checkboxes. The path-related commands are highlighted:
<canvas height=400 width=750>
 <label><input type=checkbox id=showA> Show As</label>
 <label><input type=checkbox id=showB> Show Bs</label>
 <!-- ... -->
</canvas>
<script>
function drawCheckbox( context, element, x, y, paint) {
context. save();
context. font = '10px sans-serif' ;
context. textAlign = 'left' ;
context. textBaseline = 'middle' ;
var metrics = context. measureText( element. labels[ 0 ]. textContent);
if ( paint) {
context. beginPath();
context. strokeStyle = 'black' ;
context. rect( x- 5 , y- 5 , 10 , 10 );
context. stroke();
if ( element. checked) {
context. fillStyle = 'black' ;
context. fill();
}
context. fillText( element. labels[ 0 ]. textContent, x+ 5 , y);
}
context. beginPath();
context. rect( x- 7 , y- 7 , 12 + metrics. width+ 2 , 14 );
context. drawFocusIfNeeded( element);
context. restore();
}
function drawBase() { /* ... */ }
function drawAs() { /* ... */ }
function drawBs() { /* ... */ }
function redraw() {
var canvas = document. getElementsByTagName( 'canvas' )[ 0 ];
var context = canvas. getContext( '2d' );
context. clearRect( 0 , 0 , canvas. width, canvas. height);
drawCheckbox( context, document. getElementById( 'showA' ), 20 , 40 , true );
drawCheckbox( context, document. getElementById( 'showB' ), 20 , 60 , true );
drawBase();
if ( document. getElementById( 'showA' ). checked)
drawAs();
if ( document. getElementById( 'showB' ). checked)
drawBs();
}
function processClick( event) {
var canvas = document. getElementsByTagName( 'canvas' )[ 0 ];
var context = canvas. getContext( '2d' );
var x = event. clientX;
var y = event. clientY;
var node = event. target;
while ( node) {
x -= node. offsetLeft - node. scrollLeft;
y -= node. offsetTop - node. scrollTop;
node = node. offsetParent;
}
drawCheckbox( context, document. getElementById( 'showA' ), 20 , 40 , false );
if ( context. isPointInPath( x, y) )
document. getElementById( 'showA' ). checked = ! ( document. getElementById( 'showA' ). checked);
drawCheckbox( context, document. getElementById( 'showB' ), 20 , 60 , false );
if ( context. isPointInPath( x, y) )
document. getElementById( 'showB' ). checked = ! ( document. getElementById( 'showB' ). checked);
redraw();
}
document. getElementsByTagName( 'canvas' )[ 0 ]. addEventListener( 'focus' , redraw, true );
document. getElementsByTagName( 'canvas' )[ 0 ]. addEventListener( 'blur' , redraw, true );
document. getElementsByTagName( 'canvas' )[ 0 ]. addEventListener( 'change' , redraw, true );
document. getElementsByTagName( 'canvas' )[ 0 ]. addEventListener( 'click' , processClick, false );
redraw();
</script>
context.drawFocusIfNeeded(element)
If element is focused, draws a focus ring around the current default path, following the platform conventions for focus rings.
context.drawFocusIfNeeded(path, element)
If element is focused, draws a focus ring around path, following the platform conventions for focus rings.
Objects that implement the CanvasDrawImage
interface have the drawImage()
method to draw images.
context.drawImage(image, dx, dy)
context.drawImage(image, dx, dy, dw, dh)
context.drawImage(image, sx, sy, sw, sh, dx, dy, dw, dh)
Draws the given image onto the canvas. The source rectangle (sx, sy, sw, sh) selects the part of the image to use, and the destination rectangle (dx, dy, dw, dh) gives where on the canvas it is drawn and at what size; when omitted, the source rectangle is the whole image and the destination size defaults to the source size.
If the image isn't yet fully decoded, then nothing is drawn. If the image is a canvas with no data, throws an "InvalidStateError
" DOMException
.
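A sketch of the three drawImage() forms; "sprite.png" is a hypothetical image URL, and a canvas element is assumed to be present:

const context = document.querySelector('canvas').getContext('2d');
const image = new Image();
image.src = 'sprite.png';   // hypothetical image URL

image.onload = () => {
  context.drawImage(image, 10, 10);                        // natural size at (10, 10)
  context.drawImage(image, 10, 120, 64, 64);               // scaled into a 64x64 box
  context.drawImage(image, 0, 0, 32, 32, 100, 10, 64, 64); // top-left 32x32 region, scaled up
};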
imagedata = new ImageData(sw, sh [, settings])
Returns an ImageData
object with the given dimensions and the color space indicated by settings. All the pixels in the returned object are transparent black.
Throws an "IndexSizeError
" DOMException
if either of the width or height arguments are zero.
imagedata = new ImageData(data, sw [, sh [, settings ] ])
Returns an ImageData
object using the data provided in the Uint8ClampedArray
argument, interpreted using the given dimensions and the color space indicated by settings.
As each pixel in the data is represented by four numbers, the length of the data needs to be a multiple of four times the given width. If the height is provided as well, then the length needs to be exactly the width times the height times 4.
Throws an "IndexSizeError
" DOMException
if the given data and dimensions can't be interpreted consistently, or if either dimension is zero.
imagedata = context.createImageData(imagedata)
Returns an ImageData
object with the same dimensions and color space as the argument. All the pixels in the returned object are transparent black.
imagedata = context.createImageData(sw, sh [, settings])
Returns an ImageData
object with the given dimensions. The color space of the returned object is the color space of context unless overridden by settings. All the pixels in the returned object are transparent black.
Throws an "IndexSizeError
" DOMException
if either of the width or height arguments are zero.
imagedata = context.getImageData(sx, sy, sw, sh [, settings])
Returns an ImageData
object containing the image data for the given rectangle of the bitmap. The color space of the returned object is the color space of context unless overridden by settings.
Throws an "IndexSizeError
" DOMException
if the either of the width or height arguments are zero.
imagedata.width
imagedata.height
Returns the actual dimensions of the data in the ImageData
object, in pixels.
imagedata.data
Returns the one-dimensional array containing the data in RGBA order, as integers in the range 0 to 255.
imagedata.colorSpace
Returns the color space of the pixels.
context.putImageData(imagedata, dx, dy [, dirtyX, dirtyY, dirtyWidth, dirtyHeight ])
Paints the data from the given ImageData
object onto the bitmap. If a dirty rectangle is provided, only the pixels from that rectangle are painted.
The globalAlpha
and globalCompositeOperation
properties, as well as the shadow attributes, are ignored for the purposes of this method call; pixels in the canvas are replaced wholesale, with no composition, alpha blending, no shadows, etc.
Throws an "InvalidStateError
" DOMException
if the imagedata object's data
attribute value's [[ViewedArrayBuffer]] internal slot is detached.
In the following example, the script generates an ImageData
object so that it can draw onto it.
// canvas is a reference to a <canvas> element
var context = canvas. getContext( '2d' );
// create a blank slate
var data = context. createImageData( canvas. width, canvas. height);
// create some plasma
FillPlasma( data, 'green' ); // green plasma
// add a cloud to the plasma
AddCloud( data, data. width/ 2 , data. height/ 2 ); // put a cloud in the middle
// paint the plasma+cloud on the canvas
context. putImageData( data, 0 , 0 );
// support methods
function FillPlasma( data, color) { ... }
function AddCloud( data, x, y) { ... }
Here is an example of using getImageData()
and putImageData()
to implement an edge detection filter.
<!DOCTYPE HTML>
<html lang="en">
<head>
 <title>Edge detection demo</title>
 <script>
var image = new Image();
function init() {
image. onload = demo;
image. src = "image.jpeg" ;
}
function demo() {
var canvas = document. getElementsByTagName( 'canvas' )[ 0 ];
var context = canvas. getContext( '2d' );
// draw the image onto the canvas
context. drawImage( image, 0 , 0 );
// get the image data to manipulate
var input = context. getImageData( 0 , 0 , canvas. width, canvas. height);
// get an empty slate to put the data into
var output = context. createImageData( canvas. width, canvas. height);
// alias some variables for convenience
// In this case input.width and input.height
// match canvas.width and canvas.height
// but we'll use the former to keep the code generic.
var w = input. width, h = input. height;
var inputData = input. data;
var outputData = output. data;
// edge detection
for ( var y = 1 ; y < h- 1 ; y += 1 ) {
for ( var x = 1 ; x < w- 1 ; x += 1 ) {
for ( var c = 0 ; c < 3 ; c += 1 ) {
var i = ( y* w + x) * 4 + c;
outputData[ i] = 127 + - inputData[ i - w* 4 - 4 ] - inputData[ i - w* 4 ] - inputData[ i - w* 4 + 4 ] +
- inputData[ i - 4 ] + 8 * inputData[ i] - inputData[ i + 4 ] +
- inputData[ i + w* 4 - 4 ] - inputData[ i + w* 4 ] - inputData[ i + w* 4 + 4 ];
}
outputData[( y* w + x) * 4 + 3 ] = 255 ; // alpha
}
}
// put the image data back after manipulation
context. putImageData( output, 0 , 0 );
}
 </script>
</head>
<body onload="init()">
 <canvas></canvas>
</body>
</html>
Here is an example of color space conversion applied when drawing a solid color and reading the result back using getImageData().
<!DOCTYPE HTML>
<html lang="en">
<title>Color space image data demo</title>
<canvas></canvas>
<script>
const canvas = document. querySelector( 'canvas' );
const context = canvas. getContext( '2d' , { colorSpace: 'display-p3' });
// Draw a red rectangle. Note that the hex color notation
// specifies sRGB colors.
context. fillStyle = "#FF0000" ;
context. fillRect( 0 , 0 , 64 , 64 );
// Get the image data.
const pixels = context. getImageData( 0 , 0 , 1 , 1 );
// This will print 'display-p3', reflecting the default behavior
// of returning image data in the canvas's color space.
console. log( pixels. colorSpace);
// This will print the values 234, 51, and 35, reflecting the
// red fill color, converted to 'display-p3'.
console. log( pixels. data[ 0 ]);
console. log( pixels. data[ 1 ]);
console. log( pixels. data[ 2 ]);
</script>
context.globalAlpha [ = value ]
Returns the current global alpha value applied to rendering operations.
Can be set, to change the global alpha value. Values outside of the range 0.0 .. 1.0 are ignored.
context.globalCompositeOperation [ = value ]
Returns the current compositing and blending operator, from the values defined in Compositing and Blending. [COMPOSITE]
Can be set, to change the current compositing and blending operator. Unknown values are ignored.
context.imageSmoothingEnabled [ = value ]
Returns whether pattern fills and the drawImage()
method will attempt to smooth images if their pixels don't line up exactly with the display, when scaling images up.
Can be set, to change whether images are smoothed (true) or not (false).
context.imageSmoothingQuality [ = value ]
Returns the current image-smoothing-quality preference.
Can be set, to change the preferred quality of image smoothing. The possible values are "low
", "medium
" and "high
". Unknown values are ignored.
All drawing operations on an object which implements the CanvasShadowStyles
interface are affected by the four global shadow attributes.
context.shadowColor [ = value ]
Returns the current shadow color.
Can be set, to change the shadow color. Values that cannot be parsed as CSS colors are ignored.
context.shadowOffsetX [ = value ]
context.shadowOffsetY [ = value ]
Returns the current shadow offset.
Can be set, to change the shadow offset. Values that are not finite numbers are ignored.
context.shadowBlur [ = value ]
Returns the current level of blur applied to shadows.
Can be set, to change the blur level. Values that are not finite numbers greater than or equal to zero are ignored.
If the current compositing and blending operator is "copy
", then shadows effectively won't render (since the shape will overwrite the shadow).
All drawing operations on an object which implements the CanvasFilters
interface are affected by the global filter
attribute.
context.filter [ = value ]
Returns the current filter.
Can be set, to change the filter. Values can either be the string "none
" or a string parseable as a <filter-value-list>. Other values are ignored.
Though setting context.filter to "none" will disable filters for the context, setting context.filter to "", null, or undefined is treated as an unparseable input, and the value of the current filter is left unchanged.
Coordinates used in the value of the current filter are interpreted such that one pixel is equivalent to one SVG user space unit and to one canvas coordinate space unit. Filter coordinates are not affected by the current transformation matrix. The current transformation matrix affects only the input to the filter. Filters are applied in the output bitmap's coordinate space.
Since drawing is performed using filter value "none
" until an externally-defined filter has finished loading, authors might wish to determine whether such a filter has finished loading before proceeding with a drawing operation. One way to accomplish this is to load the externally-defined filter elsewhere within the same page in some element that sends a load
event (for example, an SVG use
element), and wait for the load
event to be dispatched.
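One possible shape of that approach is sketched below; the file name filter.svg and the fragment identifier #blurry are hypothetical, and a canvas element is assumed to be present:

const context = document.querySelector('canvas').getContext('2d');

// Reference the externally-defined filter from the page so it gets fetched,
// and wait for the load event before relying on it in the canvas.
const use = document.createElementNS('http://www.w3.org/2000/svg', 'use');
use.setAttribute('href', 'filter.svg#blurry');

const svg = document.createElementNS('http://www.w3.org/2000/svg', 'svg');
svg.appendChild(use);
document.body.appendChild(svg);

use.addEventListener('load', () => {
  context.filter = 'url(filter.svg#blurry)';
  context.fillRect(0, 0, 100, 100);   // now drawn with the external filter applied
});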
When a canvas is interactive, authors should include focusable elements in the element's fallback content corresponding to each focusable part of the canvas, as in the example above.
When rendering focus rings, to ensure that focus rings have the appearance of native focus rings, authors should use the drawFocusIfNeeded()
method, passing it the element for which a ring is being drawn. This method only draws the focus ring if the element is focused, so that it can simply be called whenever drawing the element, without checking whether the element is focused or not first.
Authors should avoid implementing text editing controls using the canvas
element. Doing so has a large number of disadvantages:
This is a huge amount of work, and authors are most strongly encouraged to avoid doing any of it by instead using the input
element, the textarea
element, or the contenteditable
attribute.
Here is an example of a script that uses canvas to draw pretty glowing lines.
<canvas width="800" height="450"></canvas>
<script>
var context = document. getElementsByTagName( 'canvas' )[ 0 ]. getContext( '2d' );
var lastX = context. canvas. width * Math. random();
var lastY = context. canvas. height * Math. random();
var hue = 0 ;
function line() {
context. save();
context. translate( context. canvas. width/ 2 , context. canvas. height/ 2 );
context. scale( 0.9 , 0.9 );
context. translate( - context. canvas. width/ 2 , - context. canvas. height/ 2 );
context. beginPath();
context. lineWidth = 5 + Math. random() * 10 ;
context. moveTo( lastX, lastY);
lastX = context. canvas. width * Math. random();
lastY = context. canvas. height * Math. random();
context. bezierCurveTo( context. canvas. width * Math. random(),
context. canvas. height * Math. random(),
context. canvas. width * Math. random(),
context. canvas. height * Math. random(),
lastX, lastY);
hue = hue + 10 * Math. random();
context. strokeStyle = 'hsl(' + hue + ', 50%, 50%)' ;
context. shadowColor = 'white' ;
context. shadowBlur = 10 ;
context. stroke();
context. restore();
}
setInterval( line, 50 );
function blank() {
context. fillStyle = 'rgba(0,0,0,0.1)' ;
context. fillRect( 0 , 0 , context. canvas. width, context. canvas. height);
}
setInterval( blank, 40 );
</script>
The 2D rendering context for canvas
is often used for sprite-based games. The following example demonstrates this:
Here is the source for this example:
<!DOCTYPE HTML>
<html lang="en">
<meta charset="utf-8">
<title>Blue Robot Demo</title>
<style>
 html { overflow: hidden; min-height: 200px; min-width: 380px; }
 body { height: 200px; position: relative; margin: 8px; }
 .buttons { position: absolute; bottom: 0px; left: 0px; margin: 4px; }
</style>
<canvas width="380" height="200"></canvas>
<script>
var Landscape = function ( context, width, height) {
this . offset = 0 ;
this . width = width;
this . advance = function ( dx) {
this . offset += dx;
};
this . horizon = height * 0.7 ;
// This creates the sky gradient (from a darker blue to white at the bottom)
this . sky = context. createLinearGradient( 0 , 0 , 0 , this . horizon);
this . sky. addColorStop( 0.0 , 'rgb(55,121,179)' );
this . sky. addColorStop( 0.7 , 'rgb(121,194,245)' );
this . sky. addColorStop( 1.0 , 'rgb(164,200,214)' );
// this creates the grass gradient (from a darker green to a lighter green)
this . earth = context. createLinearGradient( 0 , this . horizon, 0 , height);
this . earth. addColorStop( 0.0 , 'rgb(81,140,20)' );
this . earth. addColorStop( 1.0 , 'rgb(123,177,57)' );
this . paintBackground = function ( context, width, height) {
// first, paint the sky and grass rectangles
context. fillStyle = this . sky;
context. fillRect( 0 , 0 , width, this . horizon);
context. fillStyle = this . earth;
context. fillRect( 0 , this . horizon, width, height- this . horizon);
// then, draw the cloudy banner
// we make it cloudy by having the draw text off the top of the
// canvas, and just having the blurred shadow shown on the canvas
context. save();
context. translate( width- (( this . offset+ ( this . width* 3.2 )) % ( this . width* 4.0 )) + 0 , 0 );
context. shadowColor = 'white' ;
context. shadowOffsetY = 30 + this . horizon/ 3 ; // offset down on canvas
context. shadowBlur = '5' ;
context. fillStyle = 'white' ;
context. textAlign = 'left' ;
context. textBaseline = 'top' ;
context. font = '20px sans-serif' ;
context. fillText( 'WHATWG ROCKS' , 10 , - 30 ); // text up above canvas
context. restore();
// then, draw the background tree
context. save();
context. translate( width- (( this . offset+ ( this . width* 0.2 )) % ( this . width* 1.5 )) + 30 , 0 );
context. beginPath();
context. fillStyle = 'rgb(143,89,2)' ;
context. lineStyle = 'rgb(10,10,10)' ;
context. lineWidth = 2 ;
context. rect( 0 , this . horizon+ 5 , 10 , - 50 ); // trunk
context. fill();
context. stroke();
context. beginPath();
context. fillStyle = 'rgb(78,154,6)' ;
context. arc( 5 , this . horizon- 60 , 30 , 0 , Math. PI* 2 ); // leaves
context. fill();
context. stroke();
context. restore();
};
this . paintForeground = function ( context, width, height) {
// draw the box that goes in front
context. save();
context. translate( width- (( this . offset+ ( this . width* 0.7 )) % ( this . width* 1.1 )) + 0 , 0 );
context. beginPath();
context. rect( 0 , this . horizon - 5 , 25 , 25 );
context. fillStyle = 'rgb(220,154,94)' ;
context. lineStyle = 'rgb(10,10,10)' ;
context. lineWidth = 2 ;
context. fill();
context. stroke();
context. restore();
};
};
</ script >
 <script>
  var BlueRobot = function () {
    this.sprites = new Image();
    this.sprites.src = 'blue-robot.png'; // this sprite sheet has 8 cells
    this.targetMode = 'idle';
    this.walk = function () {
      this.targetMode = 'walk';
    };
    this.stop = function () {
      this.targetMode = 'idle';
    };
    this.frameIndex = {
      'idle': [0], // first cell is the idle frame
      'walk': [1, 2, 3, 4, 5, 6], // the walking animation is cells 1-6
      'stop': [7], // last cell is the stopping animation
    };
    this.mode = 'idle';
    this.frame = 0; // index into frameIndex
    this.tick = function () {
      // this advances the frame and the robot
      // the return value is how many pixels the robot has moved
      this.frame += 1;
      if (this.frame >= this.frameIndex[this.mode].length) {
        // we've reached the end of this animation cycle
        this.frame = 0;
        if (this.mode != this.targetMode) {
          // switch to next cycle
          if (this.mode == 'walk') {
            // we need to stop walking before we decide what to do next
            this.mode = 'stop';
          } else if (this.mode == 'stop') {
            if (this.targetMode == 'walk')
              this.mode = 'walk';
            else
              this.mode = 'idle';
          } else if (this.mode == 'idle') {
            if (this.targetMode == 'walk')
              this.mode = 'walk';
          }
        }
      }
      if (this.mode == 'walk')
        return 8;
      return 0;
    };
    this.paint = function (context, x, y) {
      if (!this.sprites.complete) return;
      // draw the right frame out of the sprite sheet onto the canvas
      // we assume each frame is as high as the sprite sheet
      // the x,y coordinates give the position of the bottom center of the sprite
      context.drawImage(this.sprites,
                        this.frameIndex[this.mode][this.frame] * this.sprites.height, 0, this.sprites.height, this.sprites.height,
                        x-this.sprites.height/2, y-this.sprites.height, this.sprites.height, this.sprites.height);
    };
  };
 </script>
 <script>
  var canvas = document.getElementsByTagName('canvas')[0];
  var context = canvas.getContext('2d');
  var landscape = new Landscape(context, canvas.width, canvas.height);
  var blueRobot = new BlueRobot();
  // paint when the browser wants us to, using requestAnimationFrame()
  function paint() {
    context.clearRect(0, 0, canvas.width, canvas.height);
    landscape.paintBackground(context, canvas.width, canvas.height);
    blueRobot.paint(context, canvas.width/2, landscape.horizon*1.1);
    landscape.paintForeground(context, canvas.width, canvas.height);
    requestAnimationFrame(paint);
  }
  paint();
  // but tick every 100ms, so that we don't slow down when we don't paint
  setInterval(function () {
    var dx = blueRobot.tick();
    landscape.advance(dx);
  }, 100);
 </script>
 <p class="buttons">
  <input type=button value="Walk" onclick="blueRobot.walk()">
  <input type=button value="Stop" onclick="blueRobot.stop()">
 <footer>
  <small>Blue Robot Player Sprite by <a href="https://johncolburn.deviantart.com/">JohnColburn</a>.
  Licensed under the terms of the Creative Commons Attribution Share-Alike 3.0 Unported license.</small>
  <small>This work is itself licensed under a <a rel="license" href="https://creativecommons.org/licenses/by-sa/3.0/">Creative
  Commons Attribution-ShareAlike 3.0 Unported License</a>.</small>
 </footer>
The ImageBitmap rendering context
ImageBitmapRenderingContext
is a performance-oriented interface that provides a low-overhead method for displaying the contents of ImageBitmap
objects. It uses transfer semantics to reduce overall memory consumption. It also streamlines performance by avoiding intermediate compositing, unlike the drawImage()
method of CanvasRenderingContext2D
.
Using an img
element as an intermediate for getting an image resource into a canvas, for example, would result in two copies of the decoded image existing in memory at the same time: the img
element's copy, and the one in the canvas's backing store. This memory cost can be prohibitive when dealing with extremely large images. This can be avoided by using ImageBitmapRenderingContext
.
Using ImageBitmapRenderingContext
, here is how to transcode an image to the JPEG format in a memory- and CPU-efficient way:
createImageBitmap(inputImageBlob).then(image => {
  const canvas = document.createElement('canvas');
  const context = canvas.getContext('bitmaprenderer');
  context.transferFromImageBitmap(image);
  canvas.toBlob(outputJPEGBlob => {
    // Do something with outputJPEGBlob.
  }, 'image/jpeg');
});
The ImageBitmapRenderingContext interface
Support in all current engines.
context = canvas.getContext('bitmaprenderer' [, { [ alpha: false ] } ])
Returns an ImageBitmapRenderingContext
object that is permanently bound to a particular canvas
element.
If the alpha
setting is provided and set to false, then the canvas is forced to always be opaque.
context.canvas
Returns the canvas
element that the context is bound to.
context.transferFromImageBitmap(imageBitmap)
Transfers the underlying bitmap data from imageBitmap to context, and the bitmap becomes the contents of the canvas
element to which context is bound.
context.transferFromImageBitmap(null)
Replaces contents of the canvas
element to which context is bound with a transparent black bitmap whose size corresponds to the width
and height
content attributes of the canvas
element.
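For example, a minimal sketch of driving a canvas with a bitmap renderer context (the fetched image URL is an illustrative assumption):
const canvas = document.querySelector('canvas');
const context = canvas.getContext('bitmaprenderer');

fetch('photo.png')                          // hypothetical image resource
  .then(response => response.blob())
  .then(blob => createImageBitmap(blob))
  .then(bitmap => {
    // The bitmap's data is transferred (not copied) into the canvas,
    // so `bitmap` is detached afterwards.
    context.transferFromImageBitmap(bitmap);
  });

// Passing null instead replaces the canvas's contents with a transparent
// black bitmap sized according to its width and height content attributes:
// context.transferFromImageBitmap(null);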
The OffscreenCanvas interface
Support in all current engines.
OffscreenCanvas
is an EventTarget
, so both OffscreenCanvasRenderingContext2D
and WebGL can fire events at it. OffscreenCanvasRenderingContext2D
can fire contextlost
and contextrestored
, and WebGL can fire webglcontextlost
and webglcontextrestored
. [WEBGL]
OffscreenCanvas
objects are used to create rendering contexts, much like an HTMLCanvasElement
, but with no connection to the DOM. This makes it possible to use canvas rendering contexts in workers.
An OffscreenCanvas
object may hold a weak reference to a placeholder canvas
element, which is typically in the DOM, whose embedded content is provided by the OffscreenCanvas
object. The bitmap of the OffscreenCanvas
object is pushed to the placeholder canvas
element as part of the OffscreenCanvas
's relevant agent's event loop's update the rendering steps.
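For example, a minimal sketch of this placeholder arrangement (the worker file name is an illustrative assumption):
// In the document: hand control of an on-page canvas to a worker.
const canvas = document.querySelector('canvas');
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker('render-worker.js'); // hypothetical worker script
worker.postMessage({ canvas: offscreen }, [offscreen]);

// In render-worker.js: draw without any access to the DOM. The result is
// pushed to the placeholder canvas during the "update the rendering" steps.
onmessage = (event) => {
  const context = event.data.canvas.getContext('2d');
  context.fillStyle = 'green';
  context.fillRect(10, 10, 100, 100);
};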
offscreenCanvas = new OffscreenCanvas(width, height)
Returns a new OffscreenCanvas
object that is not linked to a placeholder canvas
element, and whose bitmap's size is determined by the width and height arguments.
context = offscreenCanvas.getContext(contextId [, options ])
Returns an object that exposes an API for drawing on the OffscreenCanvas
object. contextId specifies the desired API: "2d
", "bitmaprenderer
", "webgl
", "webgl2
", or "webgpu
". options is handled by that API.
This specification defines the "2d
" context below, which is similar but distinct from the "2d
" context that is created from a canvas
element. The WebGL specifications define the "webgl
" and "webgl2
" contexts. WebGPU defines the "webgpu
" context. [WEBGL] [WEBGPU]
Returns null if the canvas has already been initialized with another context type (e.g., trying to get a "2d
" context after getting a "webgl
" context).
offscreenCanvas.width [ = value ]
offscreenCanvas.height [ = value ]
These attributes return the dimensions of the OffscreenCanvas
object's bitmap.
They can be set, to replace the bitmap with a new, transparent black bitmap of the specified dimensions (effectively resizing it).
If an OffscreenCanvas
object whose dimensions were changed has a placeholder canvas
element, then the placeholder canvas
element's natural size will only be updated during the OffscreenCanvas
's relevant agent's event loop's update the rendering steps.
promise = offscreenCanvas.convertToBlob([options])
Returns a promise that will fulfill with a new Blob
object representing a file containing the image in the OffscreenCanvas
object.
The argument, if provided, is a dictionary that controls the encoding options of the image file to be created. The type
field specifies the file format and has a default value of "image/png
"; that type is also used if the requested type isn't supported. If the image format supports variable quality (such as "image/jpeg
"), then the quality
field is a number in the range 0.0 to 1.0 inclusive indicating the desired quality level for the resulting image.
canvas.transferToImageBitmap()
Returns a newly created ImageBitmap
object with the image in the OffscreenCanvas
object. The image in the OffscreenCanvas
object is replaced with a new blank image.
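For instance, a minimal sketch (suitable for running in a worker) that renders to an OffscreenCanvas and then encodes the result; the dimensions and quality value are illustrative:
const offscreen = new OffscreenCanvas(256, 256);
const context = offscreen.getContext('2d');
context.fillStyle = 'orange';
context.fillRect(0, 0, offscreen.width, offscreen.height);

// Encode the current bitmap as a JPEG file.
offscreen.convertToBlob({ type: 'image/jpeg', quality: 0.8 }).then(blob => {
  // e.g. postMessage(blob) back to the document, or upload it.
});

// Alternatively, move the bitmap into an ImageBitmap; the OffscreenCanvas
// is left with a new blank bitmap of the same dimensions.
const bitmap = offscreen.transferToImageBitmap();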
The following are the event handlers (and their corresponding event handler event types) supported, as event handler IDL attributes, by all objects implementing the OffscreenCanvas
interface:
Event handler | Event handler event type |
---|---|
oncontextlost | contextlost |
oncontextrestored | contextrestored |
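For example, a minimal sketch of listening for these events on an OffscreenCanvas:
const offscreen = new OffscreenCanvas(300, 150);
const context = offscreen.getContext('2d');

offscreen.oncontextlost = () => {
  // Stop issuing drawing commands; the current bitmap has been lost.
};
offscreen.oncontextrestored = () => {
  // Recreate any state the context held and redraw from scratch.
};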
OffscreenCanvasRenderingContext2D
Support in all current engines.
The OffscreenCanvasRenderingContext2D
object is a rendering context for drawing to the bitmap of an OffscreenCanvas
object. It is similar to the CanvasRenderingContext2D
object, with the following differences:
there is no support for user interface features;
its canvas
attribute refers to an OffscreenCanvas
object rather than a canvas
element;
An OffscreenCanvasRenderingContext2D
object has a bitmap that is initialized when the object is created.
The bitmap has an origin-clean flag, which can be set to true or false. Initially, when one of these bitmaps is created, its origin-clean flag must be set to true.
An OffscreenCanvasRenderingContext2D
object also has an alpha flag, which can be set to true or false. Initially, when the context is created, its alpha flag must be set to true. When an OffscreenCanvasRenderingContext2D
object has its alpha flag set to false, then its alpha channel must be fixed to 1.0 (fully opaque) for all pixels, and attempts to change the alpha component of any pixel must be silently ignored.
An OffscreenCanvasRenderingContext2D
object also has a color space setting of type PredefinedColorSpace
. The color space for the context's bitmap is set to the context's color space.
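Both the alpha flag and the color space are set at context-creation time through the settings dictionary passed to getContext(); a minimal sketch, with illustrative dimensions:
const offscreen = new OffscreenCanvas(640, 480);
// An opaque context whose bitmap uses the 'display-p3' color space.
const context = offscreen.getContext('2d', {
  alpha: false,
  colorSpace: 'display-p3',
});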
An OffscreenCanvasRenderingContext2D
object has an associated OffscreenCanvas
object, which is the OffscreenCanvas
object from which the OffscreenCanvasRenderingContext2D
object was created.
offscreenCanvas = offscreenCanvasRenderingContext2D.canvas
Returns the associated OffscreenCanvas
object.
Premultiplied alpha refers to one way of representing transparency in an image, the other being non-premultiplied alpha.
Under non-premultiplied alpha, the red, green, and blue channels of a pixel represent that pixel's color, and its alpha channel represents that pixel's opacity.
Under premultiplied alpha, however, the red, green, and blue channels of a pixel represent the amounts of color that the pixel adds to the image, and its alpha channel represents the amount that the pixel obscures whatever is behind it.
For instance, assuming the color channels range from 0 (off) to 255 (full intensity), these example colors are represented in the following ways:
CSS color representation | Premultiplied representation | Non-premultiplied representation | Description of color |
---|---|---|---|
rgba(255, 127, 0, 1) | 255, 127, 0, 255 | 255, 127, 0, 255 | Completely-opaque orange |
rgba(255, 255, 0, 0.5) | 127, 127, 0, 127 | 255, 255, 0, 127 | Halfway-opaque yellow |
Unrepresentable | 255, 127, 0, 127 | Unrepresentable | Additive halfway-opaque orange |
Unrepresentable | 255, 127, 0, 0 | Unrepresentable | Additive fully-transparent orange |
rgba(255, 127, 0, 0) | 0, 0, 0, 0 | 255, 127, 0, 0 | Fully-transparent ("invisible") orange |
rgba(0, 127, 255, 0) | 0, 0, 0, 0 | 0, 127, 255, 0 | Fully-transparent ("invisible") turquoise |
Converting a color value from a non-premultiplied representation to a premultiplied one involves multiplying the color's red, green, and blue channels by its alpha channel (remapping the range of the alpha channel such that "fully transparent" is 0, and "fully opaque" is 1).
Converting a color value from a premultiplied representation to a non-premultiplied one involves the inverse: dividing the color's red, green, and blue channels by its alpha channel.
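A minimal sketch of these two conversions for a single 8-bit RGBA pixel (the helper names are illustrative):
function premultiply([r, g, b, a]) {
  const alpha = a / 255; // remap alpha to the range [0, 1]
  return [Math.round(r * alpha), Math.round(g * alpha), Math.round(b * alpha), a];
}

function unpremultiply([r, g, b, a]) {
  if (a === 0) return [0, 0, 0, 0]; // fully transparent: the color is unrecoverable
  const alpha = a / 255;
  return [Math.round(r / alpha), Math.round(g / alpha), Math.round(b / alpha), a];
}

// premultiply([255, 255, 0, 127]) gives [127, 127, 0, 127], matching the
// halfway-opaque yellow row in the table above.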
Because certain colors can only be represented under premultiplied alpha (for instance, additive colors), others can only be represented under non-premultiplied alpha (for instance, "invisible" colors that hold particular red, green, and blue values even with no opacity), and division and multiplication on 8-bit integers (which is how a canvas's colors are currently stored) entail a loss of precision, converting between premultiplied and non-premultiplied alpha is a lossy operation for colors that are not fully opaque.
A CanvasRenderingContext2D
's output bitmap and an OffscreenCanvasRenderingContext2D
's bitmap must use premultiplied alpha to represent transparent colors.
It is important for canvas bitmaps to represent colors using premultiplied alpha because it affects the range of representable colors. While additive colors cannot currently be drawn onto canvases directly because CSS colors are non-premultiplied and cannot represent them, it is still possible to, for instance, draw additive colors onto a WebGL canvas and then draw that WebGL canvas onto a 2D canvas via drawImage()
.