FaceAnnotation
class FaceAnnotation extends Message
A face annotation object contains the results of face detection.
Protobuf type Google\Cloud\Vision\V1\FaceAnnotation
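In practice a FaceAnnotation is read from the response of a face detection request rather than built by hand. The following is a minimal sketch, assuming the google/cloud-vision package and its ImageAnnotatorClient::faceDetection() convenience helper; the local file name is hypothetical and the exact helper surface depends on your client version.

<?php
// Sketch only: assumes google/cloud-vision and its faceDetection() helper.
require 'vendor/autoload.php';

use Google\Cloud\Vision\V1\ImageAnnotatorClient;

$client = new ImageAnnotatorClient();
try {
    $image = file_get_contents('face.jpg'); // hypothetical local file
    $response = $client->faceDetection($image);

    // Each element of getFaceAnnotations() is a FaceAnnotation message.
    foreach ($response->getFaceAnnotations() as $face) {
        printf(
            "confidence %.2f, roll %.1f, pan %.1f, tilt %.1f\n",
            $face->getDetectionConfidence(),
            $face->getRollAngle(),
            $face->getPanAngle(),
            $face->getTiltAngle()
        );
    }
} finally {
    $client->close();
}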
Methods
__construct(): No description.

getBoundingPoly() / setBoundingPoly(): The bounding polygon around the face. The coordinates of the bounding box are in the original image's scale, as returned in ImageParams.

getFdBoundingPoly() / setFdBoundingPoly(): The fd_bounding_poly bounding polygon is tighter than the boundingPoly, and encloses only the skin part of the face. Typically, it is used to eliminate the face from any image analysis that detects the "amount of skin" visible in an image. It is not based on the landmarker results, only on the initial face detection, hence the fd (face detection) prefix.

getLandmarks() / setLandmarks(): Detected face landmarks.

getRollAngle() / setRollAngle(): Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical about the axis perpendicular to the face. Range [-180,180].

getPanAngle() / setPanAngle(): Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical plane perpendicular to the image. Range [-180,180].

getTiltAngle() / setTiltAngle(): Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's horizontal plane. Range [-180,180].

getDetectionConfidence() / setDetectionConfidence(): Detection confidence. Range [0, 1].

getLandmarkingConfidence() / setLandmarkingConfidence(): Face landmarking confidence. Range [0, 1].

getJoyLikelihood() / setJoyLikelihood(): Joy likelihood.

getSorrowLikelihood() / setSorrowLikelihood(): Sorrow likelihood.

getAngerLikelihood() / setAngerLikelihood(): Anger likelihood.

getSurpriseLikelihood() / setSurpriseLikelihood(): Surprise likelihood.

getUnderExposedLikelihood() / setUnderExposedLikelihood(): Under-exposed likelihood.

getBlurredLikelihood() / setBlurredLikelihood(): Blurred likelihood.

getHeadwearLikelihood() / setHeadwearLikelihood(): Headwear likelihood.
Details
at line 125
__construct()
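The generated constructor optionally takes an associative $data array keyed by the protobuf snake_case field names. A hedged sketch of building a FaceAnnotation directly, with made-up coordinate values:

use Google\Cloud\Vision\V1\FaceAnnotation;
use Google\Cloud\Vision\V1\BoundingPoly;
use Google\Cloud\Vision\V1\Vertex;
use Google\Cloud\Vision\V1\Likelihood;

// Sketch: populate a FaceAnnotation via the constructor's optional
// $data array (keys are the protobuf snake_case field names).
$face = new FaceAnnotation([
    'bounding_poly' => new BoundingPoly([
        'vertices' => [
            new Vertex(['x' => 10, 'y' => 10]),
            new Vertex(['x' => 110, 'y' => 10]),
            new Vertex(['x' => 110, 'y' => 130]),
            new Vertex(['x' => 10, 'y' => 130]),
        ],
    ]),
    'roll_angle' => 2.5,
    'detection_confidence' => 0.97,
    'joy_likelihood' => Likelihood::VERY_LIKELY,
]);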
at line 142
BoundingPoly getBoundingPoly()

The bounding polygon around the face. The coordinates of the bounding box are in the original image's scale, as returned in ImageParams.
The bounding box is computed to "frame" the face in accordance with human expectations. It is based on the landmarker results.
Note that one or more x and/or y coordinates may not be generated in the BoundingPoly (the polygon will be unbounded) if only a partial face appears in the image to be annotated.

Generated from protobuf field .google.cloud.vision.v1.BoundingPoly bounding_poly = 1;
at line 159
setBoundingPoly(BoundingPoly $var)

The bounding polygon around the face. The coordinates of the bounding box are in the original image's scale, as returned in ImageParams.
The bounding box is computed to "frame" the face in accordance with human expectations. It is based on the landmarker results.
Note that one or more x and/or y coordinates may not be generated in the BoundingPoly (the polygon will be unbounded) if only a partial face appears in the image to be annotated.

Generated from protobuf field .google.cloud.vision.v1.BoundingPoly bounding_poly = 1;
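A short sketch of reading the outer polygon back, assuming $face is a populated FaceAnnotation (for example, one taken from a response as shown earlier):

// Sketch: walk the outer bounding polygon of a FaceAnnotation.
$poly = $face->getBoundingPoly();
if ($poly !== null) {
    foreach ($poly->getVertices() as $vertex) {
        // In proto3, coordinates that were not generated read back as 0.
        printf("vertex: (%d, %d)\n", $vertex->getX(), $vertex->getY());
    }
}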
at line 176
BoundingPoly getFdBoundingPoly()

The fd_bounding_poly bounding polygon is tighter than the boundingPoly, and encloses only the skin part of the face. Typically, it is used to eliminate the face from any image analysis that detects the "amount of skin" visible in an image. It is not based on the landmarker results, only on the initial face detection, hence the fd (face detection) prefix.

Generated from protobuf field .google.cloud.vision.v1.BoundingPoly fd_bounding_poly = 2;
at line 192
setFdBoundingPoly(BoundingPoly $var)

The fd_bounding_poly bounding polygon is tighter than the boundingPoly, and encloses only the skin part of the face. Typically, it is used to eliminate the face from any image analysis that detects the "amount of skin" visible in an image. It is not based on the landmarker results, only on the initial face detection, hence the fd (face detection) prefix.

Generated from protobuf field .google.cloud.vision.v1.BoundingPoly fd_bounding_poly = 2;
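Because fd_bounding_poly hugs only the skin region, a common use is to reduce it to an axis-aligned box that can be masked out of later analysis. A sketch, with $face again assumed to be a populated FaceAnnotation and the masking step left to the caller:

// Sketch: reduce the tighter fd_bounding_poly to an axis-aligned box,
// e.g. to mask the skin region out of a later "amount of skin" analysis.
$fdPoly = $face->getFdBoundingPoly();
if ($fdPoly !== null && count($fdPoly->getVertices()) > 0) {
    $xs = [];
    $ys = [];
    foreach ($fdPoly->getVertices() as $vertex) {
        $xs[] = $vertex->getX();
        $ys[] = $vertex->getY();
    }
    $skinBox = [
        'left'   => min($xs),
        'top'    => min($ys),
        'right'  => max($xs),
        'bottom' => max($ys),
    ];
    // $skinBox can now be handed to whatever cropping/masking step you use.
}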
at line 204
RepeatedField getLandmarks()
Detected face landmarks.
Generated from protobuf field repeated .google.cloud.vision.v1.FaceAnnotation.Landmark landmarks = 3;
at line 215
setLandmarks(array|RepeatedField $var)
Detected face landmarks.
Generated from protobuf field repeated .google.cloud.vision.v1.FaceAnnotation.Landmark landmarks = 3;
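A sketch of walking the landmark list, assuming the generated FaceAnnotation\Landmark\Type enum class exposes the usual static name() helper for readable labels:

use Google\Cloud\Vision\V1\FaceAnnotation\Landmark\Type;

// Sketch: list each detected landmark with its type and 3D position.
// Landmark types are enum integers; Type::name() is assumed to be the
// standard generated helper that maps them to readable labels.
foreach ($face->getLandmarks() as $landmark) {
    $pos = $landmark->getPosition();
    printf(
        "%s at (%.1f, %.1f, %.1f)\n",
        Type::name($landmark->getType()),
        $pos->getX(),
        $pos->getY(),
        $pos->getZ()
    );
}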
at line 229
float getRollAngle()
Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical about the axis perpendicular to the face. Range [-180,180].
Generated from protobuf field float roll_angle = 4;
at line 242
setRollAngle(float $var)
Roll angle, which indicates the amount of clockwise/anti-clockwise rotation of the face relative to the image vertical about the axis perpendicular to the face. Range [-180,180].
Generated from protobuf field float roll_angle = 4;
at line 256
float getPanAngle()
Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical plane perpendicular to the image. Range [-180,180].
Generated from protobuf field float pan_angle = 5;
at line 269
setPanAngle(float $var)
Yaw angle, which indicates the leftward/rightward angle that the face is pointing relative to the vertical plane perpendicular to the image. Range [-180,180].
Generated from protobuf field float pan_angle = 5;
at line 282
float getTiltAngle()
Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's horizontal plane. Range [-180,180].
Generated from protobuf field float tilt_angle = 6;
at line 294
setTiltAngle(float $var)
Pitch angle, which indicates the upwards/downwards angle that the face is pointing relative to the image's horizontal plane. Range [-180,180].
Generated from protobuf field float tilt_angle = 6;
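The three pose accessors documented above, getRollAngle(), getPanAngle() and getTiltAngle(), together describe head orientation. A small sketch of a "roughly frontal" filter; the 15 degree threshold is arbitrary and purely illustrative:

use Google\Cloud\Vision\V1\FaceAnnotation;

// Sketch: accept only roughly frontal faces; threshold is illustrative.
function isRoughlyFrontal(FaceAnnotation $face): bool
{
    return abs($face->getRollAngle()) < 15.0   // in-plane rotation (roll)
        && abs($face->getPanAngle()) < 15.0    // yaw (left/right)
        && abs($face->getTiltAngle()) < 15.0;  // pitch (up/down)
}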
at line 306
float getDetectionConfidence()
Detection confidence. Range [0, 1].
Generated from protobuf field float detection_confidence = 7;
at line 317
setDetectionConfidence(float $var)
Detection confidence. Range [0, 1].
Generated from protobuf field float detection_confidence = 7;
at line 329
float getLandmarkingConfidence()
Face landmarking confidence. Range [0, 1].
Generated from protobuf field float landmarking_confidence = 8;
at line 340
setLandmarkingConfidence(float $var)
Face landmarking confidence. Range [0, 1].
Generated from protobuf field float landmarking_confidence = 8;
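A sketch that combines the two confidence accessors to keep only well-detected faces, assuming $response is the AnnotateImageResponse from the earlier example; the thresholds are illustrative, not recommendations:

// Sketch: keep only confidently detected and landmarked faces.
$confidentFaces = [];
foreach ($response->getFaceAnnotations() as $face) {
    if ($face->getDetectionConfidence() >= 0.8
        && $face->getLandmarkingConfidence() >= 0.5) {
        $confidentFaces[] = $face;
    }
}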
at line 352
int getJoyLikelihood()
Joy likelihood.
Generated from protobuf field .google.cloud.vision.v1.Likelihood joy_likelihood = 9;
at line 363
setJoyLikelihood(int $var)
Joy likelihood.
Generated from protobuf field .google.cloud.vision.v1.Likelihood joy_likelihood = 9;
at line 375
int getSorrowLikelihood()
Sorrow likelihood.
Generated from protobuf field .google.cloud.vision.v1.Likelihood sorrow_likelihood = 10;
at line 386
setSorrowLikelihood(int $var)
Sorrow likelihood.
Generated from protobuf field .google.cloud.vision.v1.Likelihood sorrow_likelihood = 10;
at line 398
int getAngerLikelihood()
Anger likelihood.
Generated from protobuf field .google.cloud.vision.v1.Likelihood anger_likelihood = 11;
at line 409
setAngerLikelihood(int $var)
Anger likelihood.
Generated from protobuf field .google.cloud.vision.v1.Likelihood anger_likelihood = 11;
at line 421
int getSurpriseLikelihood()
Surprise likelihood.
Generated from protobuf field .google.cloud.vision.v1.Likelihood surprise_likelihood = 12;
at line 432
setSurpriseLikelihood(int $var)
Surprise likelihood.
Generated from protobuf field .google.cloud.vision.v1.Likelihood surprise_likelihood = 12;
at line 444
int getUnderExposedLikelihood()
Under-exposed likelihood.
Generated from protobuf field .google.cloud.vision.v1.Likelihood under_exposed_likelihood = 13;
at line 455
setUnderExposedLikelihood(int $var)
Under-exposed likelihood.
Generated from protobuf field .google.cloud.vision.v1.Likelihood under_exposed_likelihood = 13;
at line 467
int getBlurredLikelihood()
Blurred likelihood.
Generated from protobuf field .google.cloud.vision.v1.Likelihood blurred_likelihood = 14;
at line 478
setBlurredLikelihood(int $var)
Blurred likelihood.
Generated from protobuf field .google.cloud.vision.v1.Likelihood blurred_likelihood = 14;
at line 490
int getHeadwearLikelihood()
Headwear likelihood.
Generated from protobuf field .google.cloud.vision.v1.Likelihood headwear_likelihood = 15;
at line 501
setHeadwearLikelihood(int $var)
Headwear likelihood.
Generated from protobuf field .google.cloud.vision.v1.Likelihood headwear_likelihood = 15;
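All of the likelihood accessors above return Likelihood enum values as plain integers. A sketch of turning them into readable labels and a simple threshold check, assuming the generated Likelihood class exposes the usual static name() helper and that $face is a populated FaceAnnotation:

use Google\Cloud\Vision\V1\Likelihood;

// Sketch: likelihood getters return Likelihood enum values as integers;
// Likelihood::name() is assumed to be the standard generated helper.
printf("joy: %s\n", Likelihood::name($face->getJoyLikelihood()));
printf("headwear: %s\n", Likelihood::name($face->getHeadwearLikelihood()));

// A simple threshold: treat LIKELY and VERY_LIKELY as a positive signal.
$isSmiling = $face->getJoyLikelihood() >= Likelihood::LIKELY;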