CPC G06V 40/28 (2022.01) [G06V 20/10 (2022.01); G16H 40/20 (2018.01)] |
AS A RESULT OF REEXAMINATION, IT HAS BEEN DETERMINED THAT: |
Claims 1-3, 8-11 and 16 are determined to be patentable as amended. |
Claims 4-7 and 12-15, dependent on an amended claim, are determined to be patentable. |
New claims 17-38 are added and determined to be patentable. |
1. A method for generating virtual content in a three-dimensional (3D) physical environment of a user, the method performed by at least one processor and comprising:
based on analyzing data acquired from at least one sensor, identifying a gesture of the user, the gesture including a pose of the user, wherein the pose comprises a movement, a position, a change in position, an orientation, or a change in orientation pertaining to one or more hands, one or more wrists, or one or more fingers of the user;
identifying a physical surface [ of a physical totem ] in the 3D physical environment of the user based at least partly on the gesture[ , wherein the physical surface is external to the user] ;
receiving an indication [ by at least a portion of a body of the user ] to initiate an interaction [ indicative of an action undertaken by the user ] with [ respect to at least a portion of ] the physical surface [ , wherein the indication pertains to a first position, a first orientation, or a first movement pertaining to the at least the portion of the body of the user relative to the physical surface] ;
[ in response to the interaction by the user, ] selecting a subset of user interface (UI) operations from a set of available UI operations associated with the physical surface based on contextual information [ , wherein the contextual information corresponds to the interaction and is ] associated with [ one or more locations of the at least the portion of ] the physical surface;
generating a display instruction for presenting the subset of UI operations as at least a portion of virtual content that is presented in a 3D view as an overlay to the physical surface; [ and
rendering at least the at least the portion of the virtual content onto or with respect to the physical surface] .
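Purely as an illustration of the sequence recited in amended claim 1 — gesture identification from sensor data, identification of a physical surface, contextual selection of a subset of UI operations, and generation of a display instruction for a 3D overlay — the following Python sketch uses invented helper types (Gesture, Surface) and placeholder logic; it is not a characterization of any actual implementation.

    from dataclasses import dataclass
    from math import dist

    # All type and function names below are invented for this sketch.

    @dataclass
    class Gesture:
        pose: str         # e.g. "point", "pinch"
        fingertip: tuple  # (x, y, z) fingertip position in world coordinates

    @dataclass
    class Surface:
        surface_id: str
        center: tuple     # (x, y, z) centre of the physical surface
        operations: dict  # location label -> list of available UI operation names

    def identify_gesture(sensor_frame):
        # Stand-in for analysing data acquired from at least one sensor.
        return Gesture(pose=sensor_frame["pose"], fingertip=tuple(sensor_frame["fingertip"]))

    def identify_surface(gesture, surfaces):
        # Treat the surface nearest the fingertip as the one indicated by the gesture.
        return min(surfaces, key=lambda s: dist(gesture.fingertip, s.center))

    def select_ui_subset(surface, location):
        # Contextual selection: only operations tied to the interacted location are offered.
        return surface.operations.get(location, [])

    def display_instruction(surface, ops):
        # Stand-in for an instruction to present the subset as a 3D overlay on the surface.
        return {"anchor": surface.surface_id, "content": ops, "mode": "3d_overlay"}

    if __name__ == "__main__":
        desk = Surface("desk", (0.0, 0.7, 0.4),
                       {"left_edge": ["open_keyboard"], "center": ["open_browser", "open_notes"]})
        gesture = identify_gesture({"pose": "point", "fingertip": [0.05, 0.72, 0.38]})
        surface = identify_surface(gesture, [desk])
        print(display_instruction(surface, select_ui_subset(surface, "center")))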
|
2. The method of claim 1, [ further comprising:
generating a first set of UI operations or structures on at least a portion of the physical surface prior to receiving the indication of the interaction, ] wherein
[ the subset of UI operations is selected from the set of UI operations to modify the first set of UI operations or structures based at least in part upon the first position, the first orientation, or the first movement of the at least the portion of the body of the user relative to the physical surface,]
the pose [ further ] includes one or more of an eye pose and a head pose [ , and
the pose pertains to at least one of a direction of the movement or the change in position or the change in orientation, a magnitude of the movement or the change in position or the change in orientation, a speed of the movement or the change in position or the change in orientation, or an acceleration pertaining to the one or more hands, the one or more wrists, or the one or more fingers of the user] .
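As a numeric illustration of the movement attributes enumerated in claim 2 (direction, magnitude, speed, and acceleration of a tracked hand, wrist, or finger), the following sketch derives those quantities from a short sequence of timestamped positions using NumPy; the function name and the averaging choices are invented here.

    import numpy as np

    def pose_dynamics(positions, timestamps):
        # Derive direction, magnitude, speed, and acceleration of a movement
        # from timestamped (x, y, z) samples of a tracked hand, wrist, or finger.
        p = np.asarray(positions, dtype=float)   # shape (N, 3)
        t = np.asarray(timestamps, dtype=float)  # shape (N,)
        displacement = p[-1] - p[0]
        magnitude = float(np.linalg.norm(displacement))
        direction = (displacement / magnitude).tolist() if magnitude > 0 else [0.0, 0.0, 0.0]
        velocities = np.diff(p, axis=0) / np.diff(t)[:, None]
        speed = float(np.linalg.norm(velocities, axis=1).mean())
        accelerations = np.diff(velocities, axis=0) / np.diff(t[1:])[:, None]
        acceleration = float(np.linalg.norm(accelerations, axis=1).mean()) if len(accelerations) else 0.0
        return {"direction": direction, "magnitude": magnitude,
                "speed": speed, "acceleration": acceleration}

    # Example: a hand sweeping 10 cm to the right over 0.2 seconds.
    print(pose_dynamics([[0.0, 0, 0], [0.05, 0, 0], [0.10, 0, 0]], [0.0, 0.1, 0.2]))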
|
3. The method of claim 1, wherein the indication to initiate the interaction with [ the at least the portion of ] the physical surface comprises one or more of an actuation of a UI device,
|
8. The method of claim 1, wherein
the subset of UI operations includes presenting a virtual work portal, and
one or more dimensions and a position of the virtual work portal are based at least partly on the gesture.
|
9. A system for generating virtual content in a three-dimensional (3D) physical environment of a user, the system comprising:
at least one sensor configured to acquire gesture data [ pertaining to a gesture of a user] , the gesture including a pose of the user, wherein the pose comprises a movement, a position, a change in position, an orientation, or a change in orientation pertaining to one or more hands, one or more wrists, or one or more fingers of the user;
at least one processor configured to [ : ]
analyze the gesture data,
identify the gesture based at least partly on the analyzed gesture data,
identify a physical surface [ of a physical totem ] in the 3D physical environment of the user based at least partly on the identified gesture, [ wherein the physical surface is external to the user,]
receive an indication [ by at least a portion of a body of the user ] to initiate an interaction [ indicative of an action undertaken by the user ] with [ respect to at least a portion of ] the physical surface, [ wherein the indication pertains to a ] first [ position, a first orientation, or a first movement of the at least the portion of the body of the user relative to the physical surface,
in response to the interaction by the user, ] select a subset of user interface (UI) operations from a set of available UI operations associated with the physical surface based on contextual information [ , wherein the contextual information corresponds to the interaction and is ] associated with [ one or more locations of the at least the portion of ] the physical surface, and
generate a display instruction for presenting the subset of UI operations as at least a portion of virtual content that is presented in a 3D view as an overlay to the physical surface; and
an augmented reality (AR) display system configured to present the subset of UI operations as at least a portion of virtual content that is presented in a 3D view as an overlay to the physical surface.
|
10. The system of claim 9, [ the at least one processor further configured to generate a first set of UI operations or structures on at least a portion of the physical surface prior to receiving the indication of the interaction, ] wherein [ the subset of UI operations is selected from the set of UI operations to modify the first set of UI operations or structures based at least in part upon the first position, the first orientation, or the first movement of the at least the portion of the body of the user relative to the physical surface, and ] the pose includes one or more of an eye pose [ , ] and a head pose.
|
11. The system of claim 9, wherein the indication to initiate the interaction with [ the at least the portion of ] the physical surface comprises one or more of an actuation of a UI device,
|
16. The system of claim 9, wherein
the subset of UI operations includes presenting a virtual work portal, and
one or more dimensions and a position of the virtual work portal are based at least partly on the gesture.
|
[ 17. The method of claim 1, further comprising:
mapping the first position, the first orientation, or the first movement of the at least the portion of the body of the user relative to the physical surface to one or more respective UI operations or structures; and
mapping the first position, the first orientation, or the first movement of the at least the portion of the body of the user relative to the physical surface to one or more respective inputs based at least in part upon a result of mapping the first position, the first orientation, or the first movement of the at least the portion of the body of the user relative to the physical surface, wherein
the one or more respective inputs comprise at least one of a character, a number, a punctuation, a control, or a function.]
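The two-stage mapping recited in claim 17 — a position relative to the physical surface mapped first to a UI operation or structure and then to an input such as a character — can be pictured with the following sketch; the virtual-key grid, cell size, and labels are invented for illustration only.

    # Stage 1 maps a surface-local fingertip position to a rendered structure (a
    # virtual key); stage 2 maps that structure to the input it represents.
    VIRTUAL_KEY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
    KEY_SIZE = 0.02  # metres per key cell on the physical surface

    def position_to_structure(x, y):
        # Stage 1: fingertip position (metres) -> virtual key rendered at that location.
        row = int(y // KEY_SIZE)
        col = int(x // KEY_SIZE)
        if 0 <= row < len(VIRTUAL_KEY_ROWS) and 0 <= col < len(VIRTUAL_KEY_ROWS[row]):
            return ("key", row, col)
        return None

    def structure_to_input(structure):
        # Stage 2: UI structure -> input (here, a character).
        if structure is None:
            return None
        _, row, col = structure
        return VIRTUAL_KEY_ROWS[row][col]

    # Example: a tap 4.5 cm across and 1 cm down lands on the third key of the first row.
    print(structure_to_input(position_to_structure(0.045, 0.01)))  # -> 'e'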
|
[ 18. The method of claim 1, further comprising:
respectively rendering different subsets of UI operations or structures in response to different interactions by the user, wherein respectively rendering different subsets of UI operations or structures comprises rendering a new set of UI operations or structures based at least in part upon a corresponding interaction.]
|
[ 19. The method of claim 18, wherein respectively rendering the different subsets of UI operations or structures comprises:
respectively mapping positions, orientations, or movements of a plurality of fingers of the user relative to the physical surface to respective UI operations or structures; and
respectively mapping the positions, orientations, or movements of the plurality of fingers of the user relative to the physical surface to a plurality of respective inputs based at least in part upon the respective UI operations or structures, wherein
the plurality of respective inputs comprises two or more of a character, a number, a punctuation, a control, a function, or a combination thereof.]
|
[ 20. The method of claim 1, wherein identifying the physical surface in the 3D physical environment of the user based at least partly on the gesture comprises:
identifying a subset of data pertaining to a portion of the 3D physical environment based at least in part upon the gesture, wherein the subset of data includes a set of points;
recognizing a first type of surface with a first object recognizer; and
recognizing the physical surface from a plurality of subtypes of the first type using a second object recognizer, wherein the plurality of subtypes are different from each other.]
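A hypothetical rendering of the two-recognizer approach of claim 20, in which a first object recognizer assigns a coarse surface type to a set of points and a second object recognizer selects among subtypes of that type; the plane-fitting heuristic, thresholds, and labels are invented for this sketch.

    import numpy as np

    def first_recognizer(points):
        # Coarse type: classify the patch from its fitted plane normal.
        centred = points - points.mean(axis=0)
        normal = np.linalg.svd(centred)[2][-1]   # singular vector for the flattest direction
        return "horizontal_plane" if abs(normal[2]) > 0.8 else "vertical_plane"

    def second_recognizer(points, coarse):
        # Subtype: distinguish subtypes of the coarse type, here by height above the floor.
        height = points[:, 2].mean()
        if coarse == "horizontal_plane":
            return "tabletop" if 0.5 < height < 1.2 else "floor"
        return "wall"

    # A roughly flat patch of points about 0.75 m above the floor.
    pts = np.random.rand(200, 3) * [0.5, 0.5, 0.01] + [0, 0, 0.75]
    coarse = first_recognizer(pts)
    print(coarse, "->", second_recognizer(pts, coarse))  # horizontal_plane -> tabletop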
|
[ 21. The method of claim 1, further comprising:
recognizing a first gesture by the user to interact with a virtual UI construct;
selecting a user selectable virtual room or virtual space-based application or function from the virtual UI construct based at least in part upon the first gesture from the user;
receiving a second gesture for navigating through a plurality of context-specific virtual rooms or context-specific spaces, wherein the plurality of context-specific virtual rooms or context-specific spaces is respectively associated with a plurality of user selectable virtual room or virtual space-based applications or functions;
rendering a context-specific virtual room or a context-specific virtual space that includes a fully functional construct for the application or the function to at least one eye of the user; and
providing a notification to the user that the user is in the context-specific virtual room or the context-specific virtual space.]
|
[ 22. The method of claim 21, further comprising:
populating the subset of UI operations or structures with a virtual workstation, wherein the virtual workstation comprises the plurality of user selectable virtual room or virtual space-based applications; and
providing navigation to each of the plurality of virtual room or virtual space-based applications or the plurality of context-specific virtual rooms or context-specific spaces in the subset of UI operations or structures.]
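The navigation described in claims 21 and 22 — one gesture selecting a room- or space-based application and another gesture stepping through the associated context-specific virtual rooms, with a notification to the user — might be pictured as follows; the room names, gesture labels, and notification text are all invented.

    ROOMS = {
        "work":          ["virtual_monitor", "keyboard_overlay", "calendar"],
        "entertainment": ["media_wall", "game_board"],
        "design":        ["3d_canvas", "palette"],
    }

    class RoomNavigator:
        def __init__(self):
            self.order = list(ROOMS)
            self.index = 0

        def select(self, gesture):
            # First gesture: pick an application/room from the virtual UI construct.
            if gesture == "pinch":
                return self.notify()
            raise ValueError("unrecognised selection gesture")

        def navigate(self, gesture):
            # Second gesture: step through the context-specific virtual rooms.
            if gesture == "swipe_right":
                self.index = (self.index + 1) % len(self.order)
            elif gesture == "swipe_left":
                self.index = (self.index - 1) % len(self.order)
            return self.notify()

        def notify(self):
            room = self.order[self.index]
            return f"You are now in the '{room}' room: {', '.join(ROOMS[room])}"

    nav = RoomNavigator()
    print(nav.select("pinch"))
    print(nav.navigate("swipe_right"))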
|
[ 23. The method of claim 1, further comprising:
mapping one or more characteristics of the interaction to one or more respective operations or structures in the subset of UI operations or structures, wherein
the one or more characteristics of the interaction comprise a number of actions pertaining to the interaction, a type of the interaction from a plurality of types of interactions, or a temporal duration of the interaction,
the subset of UI operations or structures includes one or more keys, one or more buttons, one or more scroll wheels, one or more joysticks, or one or more thumb sticks,
the physical surface is devoid of physical or electronic structures that correspond to the subset of UI operations or structures, and
the interaction includes performing at least one of tapping, touching, double tapping, short tapping, or long tapping the physical surface with a part of a body of the user, fingertip gripping, or enveloping grasp of one or more fingers of the user.]
|
[ 24. The method of claim 1, further comprising:
rendering an emphasis on the physical surface for indicating that the physical surface has been identified based at least in part upon the gesture; and
rendering one or more demarcations on the physical surface for orienting the physical surface or the subset of UI operations or structures, wherein
the interaction comprises an action performed by the user on at least a portion of the subset of UI operations or structures,
the gesture includes at least one of a user touching the physical surface or a user presenting a pose with one or more arms or one or more fingers with respect to the physical surface, and
the physical surface includes one or more indents or depressions at one or more respective locations at which one or more UI operations or structures are rendered.]
|
[ 25. The method of claim 1, wherein the gesture is performed by the user when at least a portion of the one or more hands, one or more wrists, or one or more fingers of the user is in contact with the physical totem.]
|
[ 26. The method of claim 25, further comprising:
detecting a first interaction with a first location on the physical totem, wherein
the first location on the physical totem corresponds to a first UI operation of the subset of UI operations;
issuing a computer command to execute the first UI operation in response to the first interaction, wherein
the physical totem comprises a physical object that has no physical or electronic structures that correspond to the first UI operation for implementing the first UI operation.]
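Claim 26's dispatch of a computer command from an interaction with a location on a totem that itself has no electronics can be illustrated with a simple location-to-operation lookup; the region table, operations, and command handlers below are invented for this sketch.

    TOTEM_REGIONS = {
        # region name: ((x_min, x_max), (y_min, y_max)) in totem-local coordinates (metres)
        "upper_half": ((0.0, 0.1), (0.05, 0.10)),
        "lower_half": ((0.0, 0.1), (0.00, 0.05)),
    }
    REGION_TO_OPERATION = {"upper_half": "volume_up", "lower_half": "volume_down"}
    COMMANDS = {"volume_up": lambda: "volume increased",
                "volume_down": lambda: "volume decreased"}

    def locate_region(x, y):
        # Find which rendered region of the totem surface was touched, if any.
        for name, ((x0, x1), (y0, y1)) in TOTEM_REGIONS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name
        return None

    def handle_touch(x, y):
        # Map the touched location to a UI operation and issue the corresponding command.
        region = locate_region(x, y)
        if region is None:
            return "no operation mapped to this location"
        return COMMANDS[REGION_TO_OPERATION[region]]()

    print(handle_touch(0.03, 0.08))  # touch on the upper half -> "volume increased"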
|
[ 27. The method of claim 25, wherein the physical totem comprises a totem controller that includes functionality to facilitate self-tracking of the physical totem, wherein the totem controller is mountable to a plurality of different physical objects including the physical totem so as to transform the plurality of different physical objects into corresponding totems.]
|
[ 28. The method of claim 25, further comprising replicating an electronic controller from the physical totem to provide a plurality of actual or non-virtual actions at least by rendering at least the subset of UI operations which, when interacted upon by user interactions via corresponding gestures with respect to corresponding locations on the physical totem, respectively correspond to the plurality of control functionalities onto or with respect to the physical surface of the physical totem.]
|
[ 29. The method of claim 25, wherein the physical totem includes one or more physical structures, features, or demarcations that respectively correspond to one or more UI operations of the subset of UI operations, wherein the one or more UI operations are respectively rendered onto or with respect to the physical surface of the physical totem based at least in part upon the one or more physical structures, features, or demarcations of the physical totem.]
|
[ 30. The method of claim 25, wherein
the subset of UI operations is rendered onto or with respect to the physical surface of the physical totem to produce a petal-shaped virtual user interface having a plurality of petals that appears to emanate from the physical totem, and
each petal of the plurality of petals is respectively mapped to one or more UI operations that respectively correspond to one or more functions, one or more categories of functions, one or more categories of contents or media types, one or more tools, or one or more software applications.]
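As a geometric illustration of the petal-shaped virtual interface of claim 30, the following sketch arranges one petal anchor per mapped function at equal angles around the totem's position; the radius and the function names are invented.

    import math

    def layout_petals(totem_xy, functions, radius=0.08):
        # Return one (x, y, function) anchor per petal, evenly spaced around the totem.
        cx, cy = totem_xy
        petals = []
        for i, fn in enumerate(functions):
            angle = 2 * math.pi * i / len(functions)
            petals.append((cx + radius * math.cos(angle), cy + radius * math.sin(angle), fn))
        return petals

    for x, y, fn in layout_petals((0.0, 0.0), ["contacts", "media", "games", "settings"]):
        print(f"petal for '{fn}' rendered at ({x:+.3f}, {y:+.3f})")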
|
[ 31. The method of claim 25, wherein the physical totem comprises a physical element or feature, the physical element or feature replicates, when interacted upon by the user, a feel of a physical input element or feature but is not connected to physical or electrical switches or electronics for performing input functionalities, and at least one UI operation of the subset of UI operations is rendered onto or with respect to the physical element or feature as virtual content to provide an input functionality to the physical element or feature.]
|
[ 32. The method of claim 25, wherein the physical totem comprises a plurality of physical faces or surfaces, and each of the plurality of physical faces or surfaces respectively corresponds to one or more virtual UI operations, a function, a category or group of functions, a category of content types or media types, one or more tools, or one or more applications.]
|
[ 33. The method of claim 32, further comprising:
receiving the interaction by the user, wherein at least a portion of the subset of UI operations is rendered as an overlay on a first physical face or surface when the first physical face or surface is within a field of view of the user at a beginning of the interaction;
in response to the interaction, ceasing rendering the at least the portion of the subset of UI operations with respect to the first physical face or surface when the interaction by the user rotates the first physical face or surface out of the field of view of the user; and
in response to the interaction, rendering a new subset of UI operations from the set of available UI operations onto or with respect to a second physical face or surface when the interaction by the user rotates the second physical face or surface into the field of view of the user.]
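The per-face behaviour of claim 33 — rendering a subset of UI operations only on faces currently within the user's field of view, and switching subsets as the totem is rotated — can be sketched with a normal-versus-view-direction test; the face names, subsets, and threshold are invented here.

    import numpy as np

    FACE_SUBSETS = {"face_a": ["play", "pause"], "face_b": ["next", "previous"]}

    def visible_faces(face_normals, view_direction, threshold=0.0):
        # A face is in view when its outward normal opposes the viewing direction.
        v = np.asarray(view_direction, dtype=float)
        v = v / np.linalg.norm(v)
        return [name for name, n in face_normals.items()
                if np.dot(np.asarray(n, dtype=float), -v) > threshold]

    def update_overlays(face_normals, view_direction):
        # Render subsets only on faces currently within the field of view.
        return {face: FACE_SUBSETS.get(face, [])
                for face in visible_faces(face_normals, view_direction)}

    # Before rotation: face_a toward the user (user looks along -z, face_a normal +z).
    print(update_overlays({"face_a": (0, 0, 1), "face_b": (1, 0, 0)}, view_direction=(0, 0, -1)))
    # After a 90-degree rotation brings face_b toward the user instead.
    print(update_overlays({"face_a": (-1, 0, 0), "face_b": (0, 0, 1)}, view_direction=(0, 0, -1)))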
|
[ 34. The method of claim 25, wherein the physical totem comprises a shape of a handheld controller, the physical totem comprises a number of user input elements that includes at least one of a switch, a button, and a joy- or thumb-stick, the switch and the joy- or thumb-stick are not connected to any switches or electronics, and the switch and the joy- or thumb-stick, when at least some of the subset of UI operations are rendered onto or with respect to the switch and the joy- or thumb-stick and when interacted upon by the interaction, respectively perform a switch function and a control function.]
|
[ 35. The method of claim 25, wherein the physical totem comprises a shape of a handheld controller, the subset of UI operations comprises a number of virtual user input elements, and the physical totem comprises a corresponding number of physical features or elements that respectively corresponds to the number of virtual user input elements, and the number of physical features includes at least one of a depression, a texture, a protrusion, or a cavity that, when the number of virtual user input elements are rendered onto or with respect to the number of physical features, replicates a feel by the user of the number of virtual input elements.]
|
[ 36. The method of claim 25, wherein the physical totem comprises a shape of a ring having a first tubular portion for receiving a finger of the user and an interaction portion or a shape of a bracelet having a second tubular portion for receiving a wrist of the user and a touch surface, at least a first part of the subset of UI operations is rendered onto or with respect to the interaction portion or the touch surface of the physical totem, at least a second part of the subset of UI operations is rendered onto or with respect to the physical totem as emanating from the physical totem, and the physical totem has no physical input structures or physical electronics while providing input functions with the subset of UI operations in response to the interaction by the user.]
|
[ 37. The method of claim 25, wherein the physical totem comprises a shape of a glove or a partial glove that has an opening for receiving a wrist of the user and one or more glove fingers for receiving one or more respective fingers of a hand of the user, and the subset of UI operations is rendered onto or with respect to the glove or the partial glove.]
|
[ 38. The method of claim 37, further comprising:
tracking the glove or the partial glove, instead of hands and fingers of the user;
mapping the first position, the first orientation, or the first movement pertaining to the glove or the partial glove to a set of user selections or inputs; and
mapping one or more other interactions by the user to one or more corresponding controls or functions, wherein the one or more other interactions by the user comprise a number of interactions by the user, a type of a plurality of interaction types, or a duration of a respective interaction of a plurality of interactions.]
|