On July 9, 2009, the US Patent & Trademark Office published a patent application from Apple that reveals various concepts behind an advanced facial detection and recognition system. Although the system described is primarily aimed at portables such as the iPod and iPhone, the patent makes it clear that such a system could be implemented throughout Apple's hardware lineup and eventually spill over into future applications such as television, vehicle navigation systems, video gaming systems, video glasses and so forth. The system would be used to identify those communicating with users via various messaging methods, including email, instant messaging, video messaging, and/or voice calls. Yet the most advantageous aspect of this system lies in its advanced security technologies, which go far beyond simple password protection. Users would be able to program the facial detection and recognition system so that only authorized faces can access specific applications, be it a spreadsheet or word processor app, or even authorize the purchase of content at Apple's iTunes Store.
Advanced Processing Environment
Apple makes the distinction between face detection and face recognition very clear at the outset of this patent: they are different processes. Face detection is the process of detecting and/or locating a face or faces within an image. Face recognition is the process of recognizing that a detected face is associated with a particular person or user. Face recognition, however, is typically performed along with and/or after face detection.
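The two-stage flow described above can be sketched in a few lines of Python. This is purely an illustration of the detection-then-recognition pipeline; the function names, data shapes, and stand-in logic are assumptions of ours, not anything specified in Apple's patent.

```python
# Hypothetical sketch of the two-stage pipeline: detection locates faces
# in an image; recognition matches a detected face against known users.

def detect_faces(image):
    """Return bounding boxes for any faces found in the image."""
    # Stand-in logic: a real detector would use the knowledge-based,
    # feature-based, template-matching, or appearance-based techniques
    # the patent enumerates.
    return [(10, 10, 60, 60)] if image.get("contains_face") else []

def recognize_face(face_region, known_users):
    """Match a detected face region against enrolled users, or None."""
    return known_users.get(face_region)

def process_image(image, known_users):
    faces = detect_faces(image)                    # step 1: detection
    if not faces:
        return "no face present"
    user = recognize_face(faces[0], known_users)   # step 2: recognition
    return user or "face detected, user unknown"
```

The key point the patent stresses is visible in the structure: recognition only ever runs on the output of detection, and some features (such as presence-based alerts) need only the first step.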
Apple’s patent FIG. 4 shown below is a diagram of a computer processing environment 400. The processing environment may include a detection decision application 402, a face recognition decision application 404, and an input/output and/or application control application 406. The environment 400 may also include detection data 424 and recognition data 426, a face vector database 420 and/or an input/output interface configuration database 422. The detection data 424 may include, without limitation, data associated with knowledge-based detection techniques 428, feature-based detection techniques 430, template matching techniques 432, and/or appearance-based detection techniques 434.
The Input/Output Control Application
Apple’s patent FIG. 5 is a diagram of a face feature vector 500 including various facial features associated with a user or class of users according to an illustrative embodiment of the invention. The face feature vector 500 may include one or more elements such as, without limitation, eyes data 502, nose data 504, mouth data 506, chin data 508, face shape data 510, face areas data 512, face feature distance/angle/relation data 514, and/or skin color data 516.
In certain embodiments, the face feature vector may include other data associated with the detection data and/or recognition data. In one embodiment, with respect to face recognition, the vector 500 is derived from a detected face in an image, and used to identify a particular user’s face. In another embodiment, with respect to face detection, the vector is derived from a sensed image, and used to detect the presence of a face in the image.
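As a rough illustration, the face feature vector 500 could be modeled as a simple record whose fields mirror the patent's enumerated elements. The field types and the flattening step are our assumptions; the patent describes only which elements the vector may contain.

```python
from dataclasses import dataclass

# Illustrative model of face feature vector 500. Field names follow the
# patent's elements (eyes 502, nose 504, ...); types are assumptions.

@dataclass
class FaceFeatureVector:
    eyes: list          # eyes data 502
    nose: list          # nose data 504
    mouth: list         # mouth data 506
    chin: list          # chin data 508
    face_shape: list    # face shape data 510
    face_areas: list    # face areas data 512
    relations: list     # feature distance/angle/relation data 514
    skin_color: list    # skin color data 516

    def as_flat_vector(self):
        """Concatenate all elements into one numeric vector for matching."""
        return (self.eyes + self.nose + self.mouth + self.chin +
                self.face_shape + self.face_areas + self.relations +
                self.skin_color)
```

Flattening the vector this way would let the same structure serve both uses the patent describes: comparing against stored user vectors for recognition, or against generic face templates for detection.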
In certain embodiments, the input/output control application determines an input interface feature and/or characteristic based on a decision signal from the face detection decision application and/or the face recognition decision application. In one embodiment, the input/output control application determines an alert output characteristic based on a decision signal from the detection decision application. For example, where the personal computing device is a cellular telephone, upon an incoming call, the device may sense whether the user is viewing its display. If the user's presence is detected, the device may only provide a visual alert via the device's display. If the user's presence is not detected, the device may initiate an audible alert, e.g., a ringtone, to alert the user about the incoming call. In this instance, the device need only apply face detection to determine whether any face is present and/or any person is viewing the device's display.
Alternatively, if an incoming email is received by the iPhone, it may perform a face recognition step to identify the user. If the face of the user is recognized and/or authenticated, then the user is alerted about the email and the email may be made available to the user for viewing. If the face of the user is not recognized and/or authenticated, the iPhone may not initiate an email alert, and may hide, suppress, and/or block the content of the email from the unauthorized user.
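The two alert behaviors just described can be condensed into a short sketch: the call case needs only detection (is any face present?), while the email case requires recognition (is this an authorized user?). The patent specifies only the behavior, not an API, so everything below is an illustrative assumption.

```python
# Minimal sketch of the alert logic described in the patent, assuming
# detection/recognition results are already available as inputs.

def alert_for_call(face_present: bool) -> str:
    # Incoming call: detection only. Any face viewing the display
    # suppresses the audible ring in favor of a visual alert.
    return "visual alert" if face_present else "audible ringtone"

def alert_for_email(recognized_user) -> str:
    # Incoming email: recognition required. Content is shown only to
    # an authenticated user; otherwise no alert and content is hidden.
    if recognized_user is not None:
        return "email alert shown to " + recognized_user
    return "alert suppressed, content hidden"
```

The asymmetry is the interesting design point: presence-sensing features run the cheaper detection step alone, while content-gating features escalate to full recognition.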
Face Recognition Methodologies
The face pattern recognition application 416 may perform pattern recognition based on at least one of Bayes decision theory, generative methods, discriminative methods, non-metric methods, algorithm-independent machine learning, unsupervised learning and clustering, and like techniques. The Bayes decision techniques may include, without limitation, at least one of the Bayes decision rule, minimum error rate classification, normal density and discriminant functions, error integrals and bounds, Bayesian networks, and compound decision theory. The generative methods may include, without limitation, at least one of maximum likelihood and Bayesian parameter estimation, sufficient statistics, various common statistical distributions, dimensionality and computational complexity, principal components analysis, Fisher linear discriminant, expectation maximization, sequential data, hidden Markov models, and non-parametric techniques including density estimation. The discriminative methods may include, without limitation, distance-based methods, nearest neighbor classification, metrics and tangent distance, fuzzy classification, linear discriminant functions (hyperplane geometry, gradient descent and perceptrons, minimum squared error procedures, and support vector machines), and artificial neural networks. The non-metric methods may include, without limitation, recognition with strings and string matching. The algorithm-independent machine learning techniques may include, without limitation, the no-free-lunch theorem, bias and variance, re-sampling for estimation, bagging and boosting, estimation of misclassification, and classifier combinations.
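To make one of these methodologies concrete, here is a sketch of nearest neighbor classification, one of the distance-based discriminative methods in the list above, applied to flat feature vectors. This is a generic textbook illustration, not code from or described in the patent.

```python
import math

# Nearest-neighbor matching over enrolled feature vectors: the query
# face is assigned to whichever enrolled user's vector is closest
# in Euclidean distance.

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbor(query, enrolled):
    """Return the enrolled user whose stored vector is closest to query.

    `enrolled` maps user names to their stored feature vectors.
    """
    return min(enrolled, key=lambda user: euclidean(query, enrolled[user]))
```

In a real recognizer, a distance threshold would also be needed so that a face unlike any enrolled user is rejected rather than matched to the least-bad candidate.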
Apple credits Jeff Gonion (Campbell, CA) and Duncan Robert Kerr (San Francisco, CA) as the inventors of patent application 20090175509, originally filed in 2008.
NOTICE: MacNN presents only a brief summary of patents with associated graphic(s) for journalistic news purposes as each such patent application and/or grant is revealed by the U.S. Patent & Trademark Office. Readers are cautioned that the full text of any patent application and/or grant should be read in its entirety for further details.
Jack Purcher, MacNN Senior Patent Editor