Multiple Stage Image Based Object Detection and Recognition
| Content Provider | The Lens |
|---|---|
| Abstract | Systems, methods, tangible non-transitory computer-readable media, and devices for autonomous vehicle operation are provided. For example, a computing system can receive object data that includes portions of sensor data. The computing system can determine, in a first stage of a multiple stage classification using hardware components, one or more first stage characteristics of the portions of sensor data based on a first machine-learned model. In a second stage of the multiple stage classification, the computing system can determine second stage characteristics of the portions of sensor data based on a second machine-learned model. The computing system can generate an object output based on the first stage characteristics and the second stage characteristics. The object output can include indications associated with detection of objects in the portions of sensor data. |
| Related Links | https://www.lens.org/images/patent/US/20230004762/A1/US_2023_0004762_A1.pdf |
| Language | English |
| Publication Date | 2023-01-05 |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Patent |
| Jurisdiction | United States of America |
| Date Applied | 2022-09-12 |
| Applicant | UATC LLC |
| Application No. | 202217942898 |
| Claim | 1.-20. (canceled) 21. An autonomous vehicle control system for an autonomous vehicle, the autonomous vehicle control system comprising: one or more processors; and one or more non-transitory, computer-readable media storing instructions that are executable to cause the one or more processors to perform operations comprising: receiving sensor data descriptive of an environment of the autonomous vehicle; determining, in a first stage of a multiple stage classification, one or more first stage characteristics of the sensor data based in part on a first machine-learned model, wherein the first stage characteristics are determined by the first machine-learned model with a first level of confidence; determining, in a second stage of the multiple stage classification, one or more second stage characteristics of the sensor data based in part on a second machine-learned model, wherein the second stage characteristics are determined by the second machine-learned model with a second level of confidence that is higher than the first level of confidence; and generating an object output based in part on the second stage characteristics, the object output indicating detection of one or more objects in the sensor data. 22. The autonomous vehicle control system of claim 21, wherein the one or more first stage characteristics of the sensor data determined in the first stage of the multiple stage classification are indicative of a likelihood that the sensor data contains objects. 23. The autonomous vehicle control system of claim 21, wherein the one or more first stage characteristics of the sensor data determined in the first stage of the multiple stage classification are indicative of one or more portions of the sensor data being classified as background or foreground. 24. The autonomous vehicle control system of claim 21, wherein the one or more second stage characteristics of the sensor data determined in the second stage of the multiple stage classification are indicative of an object classification for a type of object detected in the sensor data. 25. The autonomous vehicle control system of claim 21, the operations further comprising: generating, in the first stage, a heat map associated with the sensor data, the heat map describing a probability of an object being contained within a respective area of the sensor data. 26. The autonomous vehicle control system of claim 21, wherein an input to the second stage of the multiple stage classification is associated with one or more foreground portions of the sensor data. 27. The autonomous vehicle control system of claim 21, the operations further comprising: generating, in the first stage and based in part on the sensor data, visual descriptor output associated with the sensor data, the visual descriptor output comprising color hue information, color saturation information, brightness information, or histogram of oriented gradients information, wherein the one or more first stage characteristics are determined based in part on the visual descriptor output. 28. The autonomous vehicle control system of claim 21, wherein the sensor data comprises one or more LIDAR features and one or more camera features. 29. The autonomous vehicle control system of claim 21, the operations further comprising: controlling a motion of the autonomous vehicle based in part on the object output. 30. A method comprising: receiving sensor data descriptive of an environment of an autonomous vehicle; determining, in a first stage of a multiple stage classification, one or more first stage characteristics of the sensor data based in part on a first machine-learned model, wherein the first stage characteristics are determined by the first machine-learned model with a first level of confidence; determining, in a second stage of the multiple stage classification, one or more second stage characteristics of the sensor data based in part on a second machine-learned model, wherein the second stage characteristics are determined by the second machine-learned model with a second level of confidence that is higher than the first level of confidence; and generating an object output based in part on the second stage characteristics, the object output indicating detection of one or more objects in the sensor data. 31. The method of claim 30, wherein the one or more first stage characteristics of the sensor data determined in the first stage of the multiple stage classification are indicative of a likelihood that the sensor data contains objects. 32. The method of claim 30, wherein the one or more first stage characteristics of the sensor data determined in the first stage of the multiple stage classification are indicative of one or more portions of the sensor data being background or foreground. 33. The method of claim 30, wherein the one or more second stage characteristics of the sensor data determined in the second stage of the multiple stage classification are indicative of an object classification for a type of object detected in the sensor data. 34. The method of claim 30, further comprising: generating, in the first stage, a heat map associated with the sensor data, the heat map describing a probability of an object being contained within a respective area of the sensor data. 35. The method of claim 30, wherein an input to the second stage of the multiple stage classification is associated with one or more foreground portions of the sensor data. 36. The method of claim 30, further comprising: generating, in the first stage and based in part on the sensor data, visual descriptor output associated with the sensor data, the visual descriptor output comprising color hue information, color saturation information, brightness information, or histogram of oriented gradients information, wherein the one or more first stage characteristics are determined based in part on the visual descriptor output. 37. The method of claim 30, wherein the sensor data comprises one or more LIDAR features and one or more camera features. 38. The method of claim 30, further comprising: controlling a motion of the autonomous vehicle based in part on the object output. 39. One or more tangible, non-transitory computer-readable media storing computer-readable instructions that when executed by one or more processors cause the one or more processors to perform operations, the operations comprising: receiving sensor data descriptive of an environment of an autonomous vehicle; determining, in a first stage of a multiple stage classification, one or more first stage characteristics of the sensor data based in part on a first machine-learned model, wherein the first stage characteristics are determined by the first machine-learned model with a first level of confidence; determining, in a second stage of the multiple stage classification, one or more second stage characteristics of the sensor data based in part on a second machine-learned model, wherein the second stage characteristics are determined by the second machine-learned model with a second level of confidence that is higher than the first level of confidence; and generating an object output based in part on the second stage characteristics, the object output indicating detection of one or more objects in the sensor data. 40. The one or more tangible, non-transitory computer-readable media of claim 39, wherein: the one or more first stage characteristics of the sensor data determined in the first stage of the multiple stage classification are indicative of one or more portions of the sensor data being background or foreground; and the one or more second stage characteristics of the sensor data determined in the second stage of the multiple stage classification are indicative of an object classification for a type of object detected in the sensor data. |
| CPC Classification | Image or video recognition or understanding; Electric digital data processing; Systems for controlling or regulating non-electric variables; Image data processing or generation, in general; Computing arrangements based on specific computational models |
| Extended Family | 087-023-160-267-037 157-126-040-403-005 133-028-980-059-646 067-040-845-155-045 140-750-203-071-087 040-973-693-957-704 058-828-662-574-99X 082-287-457-068-167 000-088-552-500-927 146-274-379-110-91X 090-591-744-668-735 |
| Patent ID | 20230004762 |
| Inventor/Author | Vallespi-Gonzalez, Carlos; Amato, Joseph Lawrence; Totolos Jr., George |
| IPC | G06N7/00 G06N20/00 G06T7/521 G06T15/08 G06V10/28 G06V10/50 G06V10/56 G06V10/764 G06V20/58 G06V20/64 |
| Status | Active |
| Owner | UATC LLC; Uber Technologies Inc.; Aurora Operations Inc. |
| Simple Family | 087-023-160-267-037 090-591-744-668-735 133-028-980-059-646 082-287-457-068-167 140-750-203-071-087 040-973-693-957-704 058-828-662-574-99X 067-040-845-155-045 000-088-552-500-927 146-274-379-110-91X |
| CPC (with Group) | G06V20/64 G06V20/58 G06V10/28 G06V10/50 G06V10/56 G06V10/764 G06F18/24323 G05D1/0238 G06T7/521 G06T15/08 G06T2207/20081 G06T2210/12 G06T2207/30261 G06N20/00 G06V20/584 G06F18/241 G06F18/214 G06N7/01 G05D1/628 |
| Issuing Authority | United States Patent and Trademark Office (USPTO) |
| Kind | Patent Application Publication |
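The abstract and claims describe a cascade: a cheap first-stage model scores portions of sensor data (producing a heat map and a background/foreground split), and only foreground portions reach a higher-confidence second-stage classifier that assigns an object type. The sketch below illustrates that control flow only; it is a minimal toy, not the patented implementation. The function names (`first_stage`, `second_stage`, `detect`), the threshold, and the stand-in scoring rules are all illustrative assumptions rather than details from the patent.

```python
from statistics import mean

def first_stage(cells, threshold=0.3):
    """First stage: a cheap screen over sensor-data portions ("cells").

    Returns a heat map (per-cell probability-like score that the cell
    contains an object) and a foreground mask. A mean-intensity score
    stands in for the first machine-learned model.
    """
    heat = [mean(cell) for cell in cells]
    foreground = [score >= threshold for score in heat]
    return heat, foreground

def second_stage(cells, foreground, classes=("vehicle", "pedestrian", "cyclist")):
    """Second stage: a higher-confidence classifier run only on cells the
    first stage kept as foreground. A deterministic toy rule stands in
    for the second machine-learned model."""
    detections = []
    for i, keep in enumerate(foreground):
        if not keep:
            continue  # background cells never reach the expensive model
        label = classes[int(sum(cells[i]) * 100) % len(classes)]
        detections.append({"cell": i, "class": label})
    return detections

def detect(cells):
    """Full pipeline: first-stage screen, then second-stage classification."""
    heat, fg = first_stage(cells)
    return {"heat_map": heat, "detections": second_stage(cells, fg)}
```

For example, feeding `detect` two 16-value cells, one dim (all 0.1) and one bright (all 0.9), yields one detection: the dim cell is filtered out as background in the first stage, so the second-stage model only ever sees the bright cell. That asymmetry is the point of the cascade the claims recite: the low-confidence stage prunes cheaply, the high-confidence stage classifies selectively.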