Google\Cloud\Vision\V1
Classes
AnnotateImageRequest: Request for performing Google Cloud Vision API tasks over a user-provided image, with user-requested features.
AnnotateImageResponse: Response to an image annotation request.
BatchAnnotateImagesRequest: Multiple image annotation requests are batched into a single service call.
BatchAnnotateImagesResponse: Response to a batch image annotation request.
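A batch call wraps several per-image requests into one payload. As a rough sketch of that shape, using plain Python dicts mirroring the JSON form of the REST `images:annotate` call (the bucket URIs below are hypothetical placeholders):

```python
# Sketch of a BatchAnnotateImagesRequest body as JSON-style dicts:
# one AnnotateImageRequest (image + features) per input image.
def build_batch_request(image_uris, feature_type="LABEL_DETECTION", max_results=10):
    """Bundle one per-image request per URI into a single batch payload."""
    return {
        "requests": [
            {
                "image": {"source": {"imageUri": uri}},
                "features": [{"type": feature_type, "maxResults": max_results}],
            }
            for uri in image_uris
        ]
    }

# Hypothetical Cloud Storage objects, for illustration only.
batch = build_batch_request(["gs://my-bucket/photo1.jpg", "gs://my-bucket/photo2.jpg"])
```

The response mirrors this shape: one entry in `responses` per entry in `requests`, in the same order.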
Block: Logical element on the page.
Block\BlockType: Type of a block (text, image, etc.) as identified by OCR.
BoundingPoly: A bounding polygon for the detected image annotation.
ColorInfo: Color information consists of RGB channels, score, and the fraction of the image that the color occupies.
CropHint: Single crop hint that is used to generate a new crop when serving an image.
CropHintsAnnotation: Set of crop hints that are used to generate new crops when serving images.
CropHintsParams: Parameters for crop hints annotation request.
DominantColorsAnnotation: Set of dominant colors and their corresponding scores.
EntityAnnotation: Set of detected entity features.
FaceAnnotation: A face annotation object contains the results of face detection.
FaceAnnotation\Landmark: A face-specific landmark (for example, a face feature).
FaceAnnotation\Landmark\Type: Face landmark (feature) type.
Feature: Users describe the type of Google Cloud Vision API tasks to perform over images by using Features. Each Feature indicates a type of image detection task to perform. Features encode the Cloud Vision API vertical to operate on and the number of top-scoring results to return.
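Concretely, a Feature pairs a detection type with an optional cap on results. A minimal sketch in JSON-style dicts, using a few of the v1 feature type names:

```python
# Sketch: build a list of Feature specs (type + maxResults) as they appear
# in the JSON form of the API. The type strings are v1 Feature.Type values.
def make_features(types, max_results=10):
    """One Feature entry per requested detection type."""
    return [{"type": t, "maxResults": max_results} for t in types]

features = make_features(["FACE_DETECTION", "LABEL_DETECTION", "WEB_DETECTION"])
```

`maxResults` bounds how many top-scoring results come back for that vertical; it does not apply to types that return a single result (such as text detection).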
Feature\Type: Type of image feature.
Image: Client image to perform Google Cloud Vision API tasks over.
ImageAnnotatorClient: Service that performs Google Cloud Vision API detection tasks over client images, such as face, landmark, logo, label, and text detection. The ImageAnnotator service returns detected entities from the images.
ImageContext: Image context and/or feature-specific parameters.
ImageProperties: Stores image properties, such as dominant colors.
ImageSource: External image source (Google Cloud Storage image location).
LatLongRect: Rectangle determined by min and max LatLng pairs.
Likelihood: A bucketized representation of likelihood, which is intended to give clients highly stable results across model upgrades.
LocationInfo: Detected entity location information.
Page: Detected page from OCR.
Paragraph: Structural unit of text representing a number of words in certain order.
Position: A 3D position in the image, used primarily for Face detection landmarks.
Property: A Property consists of a user-supplied name/value pair.
SafeSearchAnnotation: Set of features pertaining to the image, computed by computer vision methods over safe-search verticals (for example, adult, spoof, medical, violence).
Symbol: A single symbol representation.
TextAnnotation: TextAnnotation contains a structured representation of OCR extracted text.
TextAnnotation\DetectedBreak: Detected start or end of a structural component.
TextAnnotation\DetectedBreak\BreakType: Enum to denote the type of break found (new line, space, etc.).
TextAnnotation\DetectedLanguage: Detected language for a structural component.
TextAnnotation\TextProperty: Additional information detected on the structural component.
Vertex: A vertex represents a 2D point in the image.
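A bounding polygon is just a list of such 2D vertices. For example, the axis-aligned box enclosing a detected polygon can be recovered like this (plain dicts standing in for the vertex messages):

```python
def enclosing_box(vertices):
    """Axis-aligned bounding box (min/max x and y) of a polygon's vertices."""
    xs = [v["x"] for v in vertices]
    ys = [v["y"] for v in vertices]
    return {"x_min": min(xs), "y_min": min(ys), "x_max": max(xs), "y_max": max(ys)}

# A slightly skewed quadrilateral, as OCR block polygons often are.
poly = [{"x": 10, "y": 20}, {"x": 110, "y": 25}, {"x": 100, "y": 80}, {"x": 12, "y": 75}]
box = enclosing_box(poly)
```

This is handy for drawing rectangles over detections when a renderer cannot draw arbitrary polygons.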
WebDetection: Relevant information for the image from the Internet.
WebDetection\WebEntity: Entity deduced from similar images on the Internet.
WebDetection\WebImage: Metadata for online images.
WebDetection\WebPage: Metadata for web pages.
Word: A word representation.