EntityAnnotation
class EntityAnnotation extends Message
Set of detected entity features.
Protobuf type Google\Cloud\Vision\V1\EntityAnnotation
Methods
__construct()
No description

getMid() / setMid(string $var)
Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.

getLocale() / setLocale(string $var)
The language code for the locale in which the entity textual description is expressed.

getDescription() / setDescription(string $var)
Entity textual description, expressed in its locale language.

getScore() / setScore(float $var)
Overall score of the result. Range [0, 1].

getConfidence() / setConfidence(float $var)
The accuracy of the entity detection in an image.

getTopicality() / setTopicality(float $var)
The relevancy of the ICA (Image Content Annotation) label to the image. For example, the relevancy of "tower" is likely higher to an image containing the detected "Eiffel Tower" than to an image containing a detected distant towering building, even though the confidence that there is a tower in each image may be the same. Range [0, 1].

getBoundingPoly() / setBoundingPoly(BoundingPoly $var)
Image region to which this entity belongs. Currently not produced for LABEL_DETECTION features. For TEXT_DETECTION (OCR), boundingPolys are produced for the entire text detected in an image region, followed by boundingPolys for each word within the detected text.

getLocations() / setLocations(array|RepeatedField $var)
The location information for the detected entity. Multiple LocationInfo elements can be present because one location may indicate the location of the scene in the image, and another location may indicate the location of the place where the image was taken.

getProperties() / setProperties(array|RepeatedField $var)
Some entities may have optional user-supplied Property (name/value) fields, such as a score or string that qualifies the entity.
Details
at line 90
__construct()
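A minimal construction sketch. The values are illustrative; in practice EntityAnnotation instances usually arrive already populated inside an annotate response. Generated protobuf messages also accept an optional associative array of field data (keyed by proto field name), which is assumed in the second form below.

    use Google\Cloud\Vision\V1\EntityAnnotation;

    // Populate a message through the setters documented below...
    $annotation = new EntityAnnotation();
    $annotation->setMid('/m/02wbm');
    $annotation->setLocale('en');
    $annotation->setDescription('Food');
    $annotation->setScore(0.92);

    // ...or from an array of field data passed to the constructor.
    $annotation = new EntityAnnotation([
        'mid'         => '/m/02wbm',
        'locale'      => 'en',
        'description' => 'Food',
        'score'       => 0.92,
    ]);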
at line 102
string
getMid()
Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
Generated from protobuf field string mid = 1;
at line 114
setMid(string $var)
Opaque entity ID. Some IDs may be available in Google Knowledge Graph Search API.
Generated from protobuf field string mid = 1;
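A short sketch of reading the ID; $annotation is a hypothetical EntityAnnotation instance, and the Knowledge Graph usage in the comment is informational, not part of this class.

    $mid = $annotation->getMid();
    if ($mid !== '') {
        // The mid (e.g. "/m/02wbm" for Food) can be passed to the Knowledge Graph
        // Search API's `ids` parameter to look up more detail about the entity.
        printf("entity id: %s\n", $mid);
    }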
at line 127
string
getLocale()
The language code for the locale in which the entity textual description is expressed.
Generated from protobuf field string locale = 2;
at line 139
setLocale(string $var)
The language code for the locale in which the entity textual description is expressed.
Generated from protobuf field string locale = 2;
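A tiny sketch of consuming the locale, with a fallback for when the service leaves the field empty ($annotation is again a hypothetical instance).

    $locale = $annotation->getLocale() !== '' ? $annotation->getLocale() : 'en';
    printf("%s [%s]\n", $annotation->getDescription(), $locale);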
at line 151
string
getDescription()
Entity textual description, expressed in its locale language.
Generated from protobuf field string description = 3;
at line 162
setDescription(string $var)
Entity textual description, expressed in its locale language.
Generated from protobuf field string description = 3;
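A sketch of collecting descriptions from a set of annotations; $labels stands in for any iterable of EntityAnnotation objects, e.g. the RepeatedField that AnnotateImageResponse::getLabelAnnotations() would typically yield.

    $names = [];
    foreach ($labels as $label) {
        $names[] = $label->getDescription();
    }
    echo implode(', ', $names), "\n";   // e.g. "Food, Fruit, Banana"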
at line 174
float
getScore()
Overall score of the result. Range [0, 1].
Generated from protobuf field float score = 4;
at line 185
setScore(float $var)
Overall score of the result. Range [0, 1].
Generated from protobuf field float score = 4;
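A sketch of filtering by score; the 0.75 cutoff is an illustrative choice, not part of the API, and $labels is a hypothetical iterable of EntityAnnotation objects.

    $keep = [];
    foreach ($labels as $label) {
        // Keep only results above an illustrative score threshold.
        if ($label->getScore() >= 0.75) {
            $keep[] = $label->getDescription();
        }
    }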
at line 200
float
getConfidence()
The accuracy of the entity detection in an image.
For example, for an image in which the "Eiffel Tower" entity is detected, this field represents the confidence that there is a tower in the query image. Range [0, 1].
Generated from protobuf field float confidence = 5;
at line 214
setConfidence(float $var)
The accuracy of the entity detection in an image.
For example, for an image in which the "Eiffel Tower" entity is detected, this field represents the confidence that there is a tower in the query image. Range [0, 1].
Generated from protobuf field float confidence = 5;
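A sketch combining the two related fields; the thresholds are illustrative and $annotation is a hypothetical instance.

    // score rates the overall result, while confidence rates the detection itself.
    if ($annotation->getConfidence() >= 0.9 && $annotation->getScore() >= 0.8) {
        printf("high-confidence entity: %s\n", $annotation->getDescription());
    }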
at line 230
float
getTopicality()
The relevancy of the ICA (Image Content Annotation) label to the image. For example, the relevancy of "tower" is likely higher to an image containing the detected "Eiffel Tower" than to an image containing a detected distant towering building, even though the confidence that there is a tower in each image may be the same. Range [0, 1].
Generated from protobuf field float topicality = 6;
at line 245
setTopicality(float $var)
The relevancy of the ICA (Image Content Annotation) label to the image. For example, the relevancy of "tower" is likely higher to an image containing the detected "Eiffel Tower" than to an image containing a detected distant towering building, even though the confidence that there is a tower in each image may be the same. Range [0, 1].
Generated from protobuf field float topicality = 6;
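A sketch of ordering labels by topicality; $labels is assumed to be a RepeatedField of EntityAnnotation (RepeatedField is traversable, so iterator_to_array works).

    $sorted = iterator_to_array($labels);
    // Most topical entities first.
    usort($sorted, fn ($a, $b) => $b->getTopicality() <=> $a->getTopicality());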
at line 260
BoundingPoly
getBoundingPoly()
Image region to which this entity belongs. Currently not produced for LABEL_DETECTION features. For TEXT_DETECTION (OCR), boundingPolys are produced for the entire text detected in an image region, followed by boundingPolys for each word within the detected text.
Generated from protobuf field .google.cloud.vision.v1.BoundingPoly bounding_poly = 7;
at line 274
setBoundingPoly(BoundingPoly $var)
Image region to which this entity belongs. Currently not produced for LABEL_DETECTION features. For TEXT_DETECTION (OCR), boundingPolys are produced for the entire text detected in an image region, followed by boundingPolys for each word within the detected text.
Generated from protobuf field .google.cloud.vision.v1.BoundingPoly bounding_poly = 7;
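A hedged sketch of attaching and reading a polygon; BoundingPoly and Vertex are the companion message classes in the same namespace, and the pixel coordinates are made up.

    use Google\Cloud\Vision\V1\BoundingPoly;
    use Google\Cloud\Vision\V1\Vertex;

    // Attach a rectangular region (illustrative pixel coordinates).
    $annotation->setBoundingPoly(new BoundingPoly([
        'vertices' => [
            new Vertex(['x' => 10,  'y' => 10]),
            new Vertex(['x' => 200, 'y' => 10]),
            new Vertex(['x' => 200, 'y' => 120]),
            new Vertex(['x' => 10,  'y' => 120]),
        ],
    ]));

    // getBoundingPoly() returns null when no region was produced (e.g. LABEL_DETECTION).
    $poly = $annotation->getBoundingPoly();
    if ($poly !== null) {
        foreach ($poly->getVertices() as $vertex) {
            printf("(%d, %d)\n", $vertex->getX(), $vertex->getY());
        }
    }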
at line 290
RepeatedField
getLocations()
The location information for the detected entity. Multiple LocationInfo elements can be present because one location may indicate the location of the scene in the image, and another location may indicate the location of the place where the image was taken.
Location information is usually present for landmarks.
Generated from protobuf field repeated .google.cloud.vision.v1.LocationInfo locations = 8;
at line 305
setLocations(array|RepeatedField $var)
The location information for the detected entity. Multiple LocationInfo elements can be present because one location may indicate the location of the scene in the image, and another location may indicate the location of the place where the image was taken.
Location information is usually present for landmarks.
Generated from protobuf field repeated .google.cloud.vision.v1.LocationInfo locations = 8;
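A hedged sketch of setting and reading locations; LocationInfo and Google\Type\LatLng are the companion message types, and the coordinates (Eiffel Tower) are illustrative.

    use Google\Cloud\Vision\V1\LocationInfo;
    use Google\Type\LatLng;

    // Attach a location by hand (illustrative coordinates).
    $annotation->setLocations([
        new LocationInfo([
            'lat_lng' => new LatLng(['latitude' => 48.8584, 'longitude' => 2.2945]),
        ]),
    ]);

    // Read locations back, e.g. from a detected landmark.
    foreach ($annotation->getLocations() as $location) {
        $latLng = $location->getLatLng();
        if ($latLng !== null) {
            printf("%.4f, %.4f\n", $latLng->getLatitude(), $latLng->getLongitude());
        }
    }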
at line 318
RepeatedField
getProperties()
Some entities may have optional user-supplied Property (name/value) fields, such as a score or string that qualifies the entity.
Generated from protobuf field repeated .google.cloud.vision.v1.Property properties = 9;
at line 330
setProperties(array|RepeatedField $var)
Some entities may have optional user-supplied Property (name/value) fields, such as a score or string that qualifies the entity.
Generated from protobuf field repeated .google.cloud.vision.v1.Property properties = 9;
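A hedged sketch of attaching and reading properties; Property is the companion message class, and the name/value pair shown is made up.

    use Google\Cloud\Vision\V1\Property;

    // Attach a qualifying name/value pair (illustrative property).
    $annotation->setProperties([
        new Property(['name' => 'category', 'value' => 'landmark']),
    ]);

    foreach ($annotation->getProperties() as $property) {
        printf("%s = %s\n", $property->getName(), $property->getValue());
    }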