US 10,846,930 C1 (12,958th)
Using passable world model for augmented or virtual reality
Samuel A. Miller, Hollywood, FL (US)
Filed by Magic Leap, Inc., Dania Beach, FL (US)
Assigned to CITIBANK, N.A., New York, NY (US)
Reexamination Request No. 90/019,493, Apr. 23, 2024.
Reexamination Certificate for Patent 10,846,930, issued Nov. 24, 2020, Appl. No. 14/705,980, filed May 7, 2015.
Application 14/705,980 is a continuation of application No. 14/690,401, filed on Apr. 18, 2015, granted, now 10,262,462.
Application 14/690,401 is a continuation-in-part of application No. 14/331,218, filed on Jul. 14, 2014, granted, now 9,671,566.
Claims priority of provisional application 62/012,273, filed on Jun. 14, 2014.
Claims priority of provisional application 61/981,701, filed on Apr. 18, 2014.
Ex Parte Reexamination Certificate issued on Jul. 2, 2025.
Int. Cl. G06K 9/00 (2022.01); G06T 19/00 (2011.01); A63F 13/56 (2014.01)
CPC G06T 19/006 (2013.01) [A63F 13/56 (2014.09)]
OG exemplary drawing
AS A RESULT OF REEXAMINATION, IT HAS BEEN DETERMINED THAT:
Claims 1, 3, 7, 8 and 18 are determined to be patentable as amended.
Claims 2, 4-6, 9-17, 19 and 20, dependent on an amended claim, are determined to be patentable.
New claims 21-40 are added and determined to be patentable.
1. A method of displaying augmented reality, comprising:
storing, [ by an augmented reality display system of a plurality of augmented reality systems, ] into a passable world model, data that comprises (1) a set of points identifying a position and orientation of a real object, (2) a descriptor for the real object, and (3) a digital representation of a real space [ including the real object ] in a physical world in three-dimensional space, wherein
the digital representation of the real space is used to place virtual content in relation to physical coordinates in the physical world [ and is constructed by the plurality of augmented reality systems each storing respective data into the passable world model] ;
running a first object recognizer of a plurality of object recognizers on at least the set of points of the passable world model, wherein
the first object recognizer recognizes a type of objects for the real object in the physical world at least by processing the set of points based at least in part on a parametric geometry of the type of objects, without regard to a specific feature about the type of objects [ , and
the parametric geometry comprises at least the descriptor for the real object] ;
recognizing a specific object for the real object at least by processing, by the [ a ] second object recognizer, the set of points that has been processed by the first object recognizer based at least in part upon the specific feature about the type of objects and a different object that pertains to the specific feature, wherein
[ the first object recognizer or the second object recognizer generates the parametric geometry of the real object based at least in part upon a set of sparse points at least by determining and parameterizing geometry of the real object into a geometric primitive and by attaching semantic information pertaining to both the descriptor of the real object as well as at least one of a movement characteristic or a constituent characteristic of the real object to the geometric primitive for the parametric geometry of the real object,]
the different object is of a different type that is different from the type of objects, and
the type of objects comprises multiple different specific objects including the specific object; and
displaying the virtual content [ , at least a portion of the digital representation, and the real object ] to a user of an [ wearing the ] augmented reality display system [ or a different augmented reality display system for the user to interact with at least the virtual content and the real object ] based at least in part [ upon ] the specific object that has been recognized for the real object [ , a surrounding object in the physical world where the user is located, ] and the digital representation of the real space in the passable world model [ , wherein a persistent version of the digital representation of the real space on a remote system is updated with at least the parametric geometry of the real object that has been recognized] .
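A minimal sketch of the two-stage recognition recited in amended claim 1, assuming hypothetical names (PassableWorldModel, recognize_type, recognize_specific); the certificate does not prescribe this code, which only illustrates storing a point set, descriptor, and digital representation, running a type-level recognizer on a parametric geometry, and then refining to a specific object.

```python
# Illustrative sketch only (not the patentee's implementation) of the data
# flow in amended claim 1: the passable world model stores (1) a point set
# for a real object's position/orientation, (2) a descriptor, and (3) a
# digital representation of the real space; a type-level recognizer runs on
# the points using a parametric geometry, and a specific-object recognizer
# then refines the result. All class and function names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class ParametricGeometry:
    primitive: str                      # e.g. "box", "cylinder"
    dimensions: Tuple[float, ...]       # primitive parameters
    descriptor: str                     # descriptor for the real object
    semantics: Dict[str, str] = field(default_factory=dict)

@dataclass
class PassableWorldModel:
    points: Dict[str, List[Point3D]] = field(default_factory=dict)     # per-object point sets
    descriptors: Dict[str, str] = field(default_factory=dict)
    space: Dict[str, ParametricGeometry] = field(default_factory=dict)  # digital representation

def recognize_type(points: List[Point3D], descriptor: str) -> ParametricGeometry:
    """Type-level pass: fit a coarse primitive to the points, without specific features."""
    xs, ys, zs = zip(*points)
    dims = (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
    return ParametricGeometry("box", dims, descriptor,
                              {"movement": "static", "constituent": "rigid"})

def recognize_specific(geom: ParametricGeometry) -> str:
    """Specific pass: refine the recognized type into a specific object."""
    if geom.primitive == "box" and geom.dimensions[2] < 1.0:
        return f"{geom.descriptor}:coffee_table"    # illustrative rule only
    return f"{geom.descriptor}:generic"

if __name__ == "__main__":
    pwm = PassableWorldModel()
    pwm.points["obj1"] = [(0, 0, 0), (1.2, 0, 0), (1.2, 0.6, 0), (0, 0.6, 0.45)]
    pwm.descriptors["obj1"] = "table"
    geom = recognize_type(pwm.points["obj1"], pwm.descriptors["obj1"])
    pwm.space["obj1"] = geom                        # update the persistent representation
    print(recognize_specific(geom))                 # e.g. "table:coffee_table"
```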
3. The method of claim 1, wherein the set of points is captured by a [ the ] plurality of augmented reality display systems, and the plurality of augmented reality display systems capture [ the respective ] data pertaining to a plurality of locations in the physical world.
7. The method of claim 1, wherein a new set of points identifying the [ a ] position and the [ an ] orientation of a new real object in the real space and a new descriptor of the new real object are synchronized to the passable world model stored on a remote system, wherein the new real object is recognized in the real space by the augmented reality display system based at least in part upon a different parametric geometry of a different type of objects.
8. The method of claim 1, wherein semantic information is attached to the parametric geometry of the type of objects, and at least one object recognizer of the plurality of object recognizers parameterizes geometry of the type of object [ objects ] into the parametric geometry so that the parametric geometry of the type of objects is characterized by being resizable.
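The "resizable" property recited in amended claim 8 can be pictured with a short sketch, assuming a hypothetical Primitive type: because the geometry is held as primitive parameters rather than raw points, rescaling it leaves the attached semantic information intact.

```python
# Hypothetical sketch of a resizable parametric geometry: the primitive's
# dimensions can be rescaled while the attached semantic information
# (e.g. movement or constituent characteristics) is carried along unchanged.
from dataclasses import dataclass, replace
from typing import Dict, Tuple

@dataclass(frozen=True)
class Primitive:
    kind: str
    dimensions: Tuple[float, float, float]
    semantics: Dict[str, str]

def resize(p: Primitive, scale: float) -> Primitive:
    """Return a rescaled copy; semantics travel with the geometry."""
    return replace(p, dimensions=tuple(d * scale for d in p.dimensions))

table = Primitive("box", (1.2, 0.6, 0.45), {"movement": "static", "type": "table"})
print(resize(table, 0.5))   # half-size table, same semantic labels
```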
18. The method of claim 1, further comprising:
disposing, by a first user wearing a first augmented reality display system, a virtual object in relation to same physical coordinates in the physical world; and
displaying the virtual object in relation to the same physical coordinates in the physical world in the augmented reality display system of the user based at least in part upon the digital representation of the real space [ , wherein each of the plurality of augmented reality systems builds a respective digital representation of a corresponding real space to incrementally construct the digital representation for storage into the passable world model] .
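A sketch, under assumed names, of the incremental construction described in amended claim 18: each augmented reality system contributes its local digital representation to the shared passable world model, and a virtual object placed by one user at physical coordinates can then be displayed at the same coordinates on another user's device. The map format and merge rule here are assumptions for illustration only.

```python
# Illustrative sketch (hypothetical names) of incremental construction of the
# passable world model and of displaying a virtual object, placed by a first
# user, at the same physical coordinates for a second user.
from typing import Dict, Tuple

Coord = Tuple[float, float, float]

shared_model: Dict[str, Coord] = {}          # passable world model (shared)
virtual_objects: Dict[str, Coord] = {}       # virtual content anchored to world coordinates

def contribute(local_map: Dict[str, Coord]) -> None:
    """Merge one device's local digital representation into the shared model."""
    shared_model.update(local_map)

def place_virtual(name: str, at: Coord) -> None:
    virtual_objects[name] = at               # disposed by the first user

def render_for(device_pose: Coord) -> Dict[str, Coord]:
    """Return object positions relative to a second device's pose."""
    return {n: tuple(c - p for c, p in zip(pos, device_pose))
            for n, pos in virtual_objects.items()}

contribute({"wall_1": (0.0, 0.0, 0.0), "table_1": (2.0, 0.0, 0.0)})
place_virtual("virtual_lamp", (2.0, 0.0, 0.9))
print(render_for((1.0, 0.0, 0.0)))           # same physical coordinates, second viewer
```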
[ 21. The method of claim 1, wherein the descriptor is associated with a description pertaining to at least one feature of the real object and a function served by the at least one feature of the real object.]
[ 22. The method of claim 1, wherein the user to whom the virtual content is displayed is not located in the real space in which the real object is located, and the real object together with the virtual content and the at least the portion of the digital representation displayed to the user is recreated as a digital object representation for the real object using at least the parametric geometry of the type of objects for the real object.]
[ 23. The method of claim 1, wherein the parametric geometry of the real object comprises raster imagery and a polygonal definition of the real object.]
[ 24. The method of claim 1, wherein the passable world model comprises the first object recognizer or the second object recognizer.]
[ 25. The method of claim 1, wherein the passable world model comprises the parametric geometry of the real object, raster imagery, points belonging to the parametric geometry, and at least the descriptor of the real object that has been recognized.]
[ 26. The method of claim 1, further comprising:
synchronizing the parametric geometry of the real object from the augmented reality display system to a cloud; and
reinserting the parametric geometry into the passable world model or one or more keyframes presented to another augmented reality display system worn by a different user based at least in part upon a dynamic estimate of a position or a movement of the real object using the parametric geometry of the real object.]
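The cloud synchronization and keyframe reinsertion recited in new claims 26 and 27 can be illustrated with a minimal sketch; the "cloud" store, Keyframe type, and constant-velocity estimator below are assumptions, not elements disclosed by the certificate.

```python
# Sketch under stated assumptions: a device synchronizes a recognized
# object's parametric geometry to a cloud store, and another device
# reinserts that geometry into its keyframes at a dynamically estimated
# position (here, simple constant-velocity extrapolation).
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]

cloud: Dict[str, dict] = {}                              # stand-in for the remote store

def sync_to_cloud(obj_id: str, geometry: dict) -> None:
    cloud[obj_id] = geometry                             # upload parametric geometry

def estimate_position(last: Vec3, velocity: Vec3, dt: float) -> Vec3:
    return tuple(p + v * dt for p, v in zip(last, velocity))

@dataclass
class Keyframe:
    timestamp: float
    objects: List[Tuple[str, Vec3]] = field(default_factory=list)

def reinsert(obj_id: str, kf: Keyframe, last: Vec3, velocity: Vec3) -> None:
    """Place the cloud-synced geometry into a keyframe shown on another device."""
    kf.objects.append((obj_id, estimate_position(last, velocity, kf.timestamp)))

sync_to_cloud("cup_7", {"primitive": "cylinder", "radius": 0.04, "height": 0.1})
kf = Keyframe(timestamp=0.5)
reinsert("cup_7", kf, last=(1.0, 0.0, 0.8), velocity=(0.2, 0.0, 0.0))
print(kf)   # cup_7 reinserted at its estimated position
```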
[ 27. The method of claim 1, further comprising:
synchronizing the parametric geometry of the real object from the augmented reality display system to a cloud; and
rendering at least the real object as a virtual object in one or more keyframes presented to the user wearing the different augmented reality display system located at a different location than the augmented reality display system at least by reinserting the parametric geometry of the real object into the one or more keyframes, wherein no image capturing devices are required at the different location of the another user.]
[ 28. The method of claim 1, wherein the augmented reality display system includes a head-mounted subsystem including a fiber scan projector.]
[ 29. The method of claim 1, wherein the first object recognizer and the second object recognizer are simultaneously, independently run on the set of points in the passable world model to respectively recognize the type of objects for the real object and the specific object for the real object.]
[ 30. The method of claim 1, wherein the second object recognizer recognizes the specific object by using one or more inherent properties of the specific object or an ontological relationship between specific objects and types of real objects.]
[ 31. The method of claim 1, wherein displaying the virtual content to the user comprises:
receiving, at the first object recognizer or the second object recognizer, one or more two-dimensional (2D) segmented image features and a plurality of three-dimensional (3D) sparse points; and
deriving, by the first object recognizer or the second object recognizer, an object structure of the real object and a property about the real object at least by fusing the one or more 2D segmented image features and the plurality of 3D sparse points.]
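A minimal sketch, with a hypothetical camera model and segment format, of the fusion step in new claim 31: 2D segmented image features are combined with 3D sparse points by projecting each point into the image and keeping those that fall within a segment, from which a coarse object structure and a property are derived.

```python
# Illustrative fusion of 2D segmented image features with 3D sparse points.
# The pinhole projection, bounding-box segments, and derived "height" and
# "depth" properties are assumptions chosen for brevity.
from typing import Dict, List, Tuple

Point3D = Tuple[float, float, float]
Box2D = Tuple[float, float, float, float]          # (u_min, v_min, u_max, v_max)

def project(p: Point3D, f: float = 500.0) -> Tuple[float, float]:
    """Pinhole projection with focal length f; assumes z > 0."""
    x, y, z = p
    return (f * x / z, f * y / z)

def fuse(segment: Box2D, sparse_points: List[Point3D]) -> Dict[str, object]:
    inside = []
    for p in sparse_points:
        u, v = project(p)
        if segment[0] <= u <= segment[2] and segment[1] <= v <= segment[3]:
            inside.append(p)                        # 3D point supports this 2D segment
    ys = [p[1] for p in inside] or [0.0]
    zs = [p[2] for p in inside] or [0.0]
    return {"structure": inside,                    # points belonging to the object
            "height": max(ys) - min(ys),            # derived property
            "depth": sum(zs) / len(zs)}             # mean distance to the object

points = [(0.1, 0.0, 2.0), (0.1, 0.4, 2.1), (1.5, 1.5, 5.0)]
print(fuse((0.0, -10.0, 100.0, 120.0), points))
```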
[ 32. The method of claim 1, wherein displaying the virtual content further comprises:
reinserting a digital object representation of the real object that has been recognized as a part of the virtual content; and
animating the digital object representation of the real object as the part of the virtual content displayed to the user wearing the augmented reality display system or the different augmented reality display system.]
[ 33. The method of claim 32, wherein displaying the virtual content further comprises:
rendering the digital object representation in the animation based at least in part upon an interaction of the user, rather than constantly updating the digital object representation based on keyframes captured by the augmented reality display system or the different augmented reality display system or by a third-party camera.]
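A short sketch, under assumed names, of the behavior in new claims 32 and 33: the recognized real object is reinserted into the virtual content as a digital object representation and animated in response to user interaction, rather than being re-rendered from a continuous stream of captured keyframes.

```python
# Hypothetical sketch: the digital object representation is updated only when
# the user interacts with it (e.g. drags it), not on every captured keyframe.
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class DigitalObject:
    name: str
    position: Vec3

def reinsert(name: str, position: Vec3) -> DigitalObject:
    """Add the recognized object's digital representation to the virtual content."""
    return DigitalObject(name, position)

def animate_on_interaction(obj: DigitalObject, drag: Vec3) -> DigitalObject:
    """Move the representation in response to a user interaction."""
    return DigitalObject(obj.name, tuple(p + d for p, d in zip(obj.position, drag)))

cup = reinsert("cup_7", (1.0, 0.0, 0.8))
cup = animate_on_interaction(cup, drag=(0.1, 0.0, 0.0))   # driven by the user, not by new keyframes
print(cup)
```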
[ 34. The method of claim 1, wherein displaying the virtual content further comprises:
recreating the real object as a virtual object in the digital representation of the physical world using at least the parametric geometry of the real object.]
[ 35. A method of displaying augmented reality, comprising:
storing, by an augmented reality display system of a plurality of augmented reality systems, into a passable world model, data that comprises (1) a set of points identifying a position and orientation of a real object, (2) a descriptor for the real object, and (3) a digital representation of a real space including the real object in a physical world in three-dimensional space, wherein
the digital representation of the real space is used to place virtual content in relation to physical coordinates in the physical world and is constructed by the plurality of augmented reality systems each storing respective data into the passable world model;
running a first object recognizer of a plurality of object recognizers on at least the set of points of the passable world model, wherein
the first object recognizer recognizes a type of objects for the real object in the physical world at least by processing the set of points based at least in part on a parametric geometry of the type of objects, without regard to a specific feature about the type of objects, and
the parametric geometry comprises at least the descriptor for the real object;
recognizing a specific object for the real object at least by processing, by a second object recognizer, the set of points that has been processed by the first object recognizer based at least in part upon the specific feature about the type of objects and a different object that pertains to the specific feature, wherein
the first object recognizer or the second object recognizer generates the parametric geometry of the real object as a geometric primitive and attaches semantic information pertaining to the descriptor of the real object to the parametric geometry of the real object that has been recognized,
the different object is of a different type that is different from the type of objects, and
the type of objects comprises multiple different specific objects including the specific object; and
displaying the virtual content, at least a portion of the digital representation, and the real object to a user wearing the augmented reality display system or a different augmented reality display system for the user to interact with at least the virtual content and the real object based at least in part upon the specific object that has been recognized for the real object and the digital representation of the real space in the passable world model, wherein a persistent version of the digital representation of the real space on a remote system is updated with at least the parametric geometry of the real object that has been recognized,
wherein displaying the virtual content to the user comprises:
receiving, at the first object recognizer or the second object recognizer, one or more two-dimensional (2D) segmented image features and a plurality of three-dimensional (3D) sparse points;
deriving, by the first object recognizer or the second object recognizer, an object structure of the real object and a property about the real object at least by fusing the one or more 2D segmented image features and the plurality of 3D sparse points;
determining, by the first object recognizer or the second object recognizer, a geometry of the real object;
parametrizing, by the first object recognizer or the second object recognizer, the geometry of the real object into the geometric primitive at least by attaching the semantic information to the geometric primitive, wherein the semantic information pertains to a movement characteristic or a constituent characteristic of the real object; and
identifying a surrounding object in the physical world in which the user is located.]
[ 36. The method of claim 35, wherein displaying the virtual content to the user further comprises:
resizing the geometric primitive for the real object into a resized object based at least in part upon the surrounding object in the physical world; and
estimating a positioning characteristic or a movement characteristic of the resized object by using at least parametric information pertaining to the real object.]
[ 37. The method of claim 36, wherein displaying the virtual content to the user further comprises:
reinserting the resized object into the virtual content for display to the user via the augmented reality display system or the different augmented reality display system based at least in part upon the resized object, the positioning characteristic or the movement characteristic of the resized object.]
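The resizing and estimation steps of new claims 36 and 37 can be pictured with a brief sketch; the Box type, the uniform-scaling fit, and the "resting height" positioning characteristic are illustrative assumptions only.

```python
# Hedged sketch: the geometric primitive for a recognized object is resized
# relative to a surrounding object in the user's physical space, a
# positioning characteristic of the resized object is estimated from its
# parametric information, and the result can be reinserted into the virtual
# content for display.
from dataclasses import dataclass

@dataclass
class Box:
    width: float
    depth: float
    height: float

def resize_to_fit(obj: Box, surrounding: Box, margin: float = 0.9) -> Box:
    """Scale the object uniformly so it fits within the surrounding object."""
    s = margin * min(surrounding.width / obj.width,
                     surrounding.depth / obj.depth,
                     surrounding.height / obj.height)
    return Box(obj.width * s, obj.depth * s, obj.height * s)

def resting_height(resized: Box, floor_z: float = 0.0) -> float:
    """Positioning characteristic: center height when resting on the floor."""
    return floor_z + resized.height / 2.0

virtual_chair = Box(0.5, 0.5, 1.0)
alcove = Box(0.8, 0.6, 0.9)                       # surrounding object in the room
fitted = resize_to_fit(virtual_chair, alcove)
print(fitted, resting_height(fitted))             # reinsert `fitted` into the virtual content
```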
[ 38. The method of claim 1, wherein the second object recognizer recognizes the specific object for the real object without employing information or data produced by the first object recognizer.]
[ 39. The method of claim 1, wherein recognizing the specific object by the second object recognizer comprises:
identifying, from an output of the first object recognizer, information pertaining to the type of objects for the real object; and
recognizing the specific object at least further by running a specific object recognizer for recognizing the specific object on an instance of the type of objects for the real object recognized by the first object recognizer.]
[ 40. The method of claim 39, wherein recognizing the specific object by the second object recognizer comprises:
prior to running the specific object recognizer for the specific object, recognizing a generic object for the specific object at least by running a generic object recognizer for the specific object on the instance of the type of objects for the real object recognized by the first object recognizer.]
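The staged recognition of new claims 39 and 40 can be summarized in a small sketch, assuming hypothetical recognizer functions: the first recognizer yields a type for the real object, a generic recognizer then runs on that instance, and a specific recognizer finally names the particular object.

```python
# Illustrative sketch (all recognizer names and rules hypothetical) of the
# chain: first object recognizer -> generic recognizer -> specific recognizer.
from typing import List, Tuple

Point3D = Tuple[float, float, float]

def first_recognizer(points: List[Point3D]) -> str:
    # Type-level result, e.g. "furniture", without using specific features.
    return "furniture" if len(points) > 3 else "unknown"

def generic_recognizer(points: List[Point3D], object_type: str) -> str:
    # Generic result within the recognized type, e.g. "chair-like".
    return "chair-like" if object_type == "furniture" else "unclassified"

def specific_recognizer(points: List[Point3D], generic: str) -> str:
    # Specific object within the generic class.
    return "office_chair" if generic == "chair-like" else "unknown_object"

def recognize(points: List[Point3D]) -> str:
    obj_type = first_recognizer(points)              # output of the first object recognizer
    generic = generic_recognizer(points, obj_type)   # generic recognizer runs first (claim 40)
    return specific_recognizer(points, generic)      # specific recognizer on that instance (claim 39)

print(recognize([(0, 0, 0), (0.5, 0, 0), (0.5, 0.5, 0), (0, 0.5, 0.9), (0.25, 0.25, 0.45)]))
```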