


Preface: SpaceNet LLC is a nonprofit organization dedicated to accelerating open source, artificial intelligence applied research for geospatial applications, specifically foundational mapping (i.e., building footprint & road network detection). SpaceNet is run in collaboration by co-founder and managing partner CosmiQ Works, co-founder and co-chair Maxar Technologies, and our partners including Amazon Web Services (AWS), Capella Space, Topcoder, IEEE GRSS, the National Geospatial-Intelligence Agency and Planet.

In this post we dive into some of the building-level metrics for the SpaceNet 7 Multi-Temporal Urban Development Challenge. We compare results to past SpaceNet challenges and note that, despite the difficulty of identifying small buildings in moderate resolution (4m) imagery, the pixels of SpaceNet 7 seem to overachieve when compared to SpaceNets past. A follow-up post will dive deeper into the temporal change and tracking lessons from this challenge.

Performance vs. IoU

For all five of the SpaceNet challenges focused on buildings (SpaceNets 3 and 5 explored road networks), we used an intersection over union (IoU) metric as the basis for SpaceNet scoring. This metric was illustrated in one of our SpaceNet 4 analysis blogs; see Figure 1.

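As a concrete reference for how an IoU-based score works, here is a minimal sketch of footprint matching at the IoU ≥ 0.5 threshold used in the figures below. It assumes footprints are available as Shapely polygons; the function names and the greedy matching loop are illustrative only, not the official SpaceNet scoring code.

```python
# Minimal IoU matching sketch (illustrative, not the official SpaceNet scorer).
from shapely.geometry import Polygon

def iou(poly_a: Polygon, poly_b: Polygon) -> float:
    """Intersection over union of two footprint polygons."""
    inter = poly_a.intersection(poly_b).area
    union = poly_a.union(poly_b).area
    return inter / union if union > 0 else 0.0

def match_footprints(preds, truths, thresh=0.5):
    """Greedily match predictions to ground truth at IoU >= thresh.

    Returns (true_positives, false_positives, false_negatives).
    """
    unmatched = list(range(len(truths)))
    tp = 0
    for p in preds:
        best_iou, best_j = 0.0, None
        for j in unmatched:
            score = iou(p, truths[j])
            if score > best_iou:
                best_iou, best_j = score, j
        if best_j is not None and best_iou >= thresh:
            tp += 1
            unmatched.remove(best_j)
    fp = len(preds) - tp
    fn = len(unmatched)
    return tp, fp, fn

# Recall = tp / (tp + fn), precision = tp / (tp + fp).
```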
Figure 4. Comparison of building prediction recall (blue) for SpaceNets 4, 6, and 7, overlaid on building area histograms (red), with IoU ≥ 0.5. Left: Winning SpaceNet 4 predictions (originally published here) from 0.5m optical data; here we focus on the blue (nadir) line. Middle: Winning SpaceNet 6 predictions (originally published here) from 0.5m synthetic aperture radar data. Right: Winning SpaceNet 7 predictions from 4m optical data.

The building area histograms look similar in Figure 4 for SpaceNets 4 and 7, yet the performance curves are very different: SpaceNet 4 performance asymptotes at ~120 m², whereas SpaceNet 7 asymptotes at ~1000 m² with much lower recall. Of course, the pixel areas differ by a factor of 64, i.e., (4m / 0.5m)², so a 120 m² SpaceNet 4 building is a ~20 ⨉ 20 pixel square, whereas a 1000 m² SpaceNet 7 building occupies only a ~8 ⨉ 8 pixel square.
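The resolution argument above is just a ground sample distance (GSD) conversion; the short sketch below reproduces the arithmetic (the numbers are the ones quoted above, the helper function is ours, not part of any SpaceNet tooling).

```python
import math

def area_m2_to_pixels(area_m2: float, gsd_m: float) -> float:
    """Convert a footprint area in square meters to a pixel area at a given GSD."""
    return area_m2 / gsd_m**2

# Factor between SpaceNet 4 (0.5m GSD) and SpaceNet 7 (4m GSD) pixel areas:
print((4.0 / 0.5) ** 2)              # 64.0

# A 120 m^2 building at 0.5m GSD:
px = area_m2_to_pixels(120, 0.5)     # 480 pixels
print(math.sqrt(px))                 # ~21.9, i.e. roughly a 20x20 pixel square

# A 1000 m^2 building at 4m GSD:
px = area_m2_to_pixels(1000, 4.0)    # 62.5 pixels
print(math.sqrt(px))                 # ~7.9, i.e. roughly an 8x8 pixel square
```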

Figure 5 plots pixel sizes directly, demonstrating the far superior pixel-wise performance of SpaceNet 7 predictions in the small-area regime (~5⨉ greater for 100 pix² objects), though SpaceNet 4 predictions have a far higher score ceiling. So SpaceNet 7 predictions are actually superior to SpaceNets 4 and 6 when comparing comparable building pixel areas: a ~8 ⨉ 8 pixel square in SpaceNet 4 yields a recall of ~0.1, whereas in SpaceNet 7 the recall is ~0.55.
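For readers who want to reproduce curves like those in Figures 4 and 5 on their own predictions, a minimal sketch of recall binned by footprint area follows (NumPy only; the array values and bin edges are placeholders, not the data behind our figures).

```python
import numpy as np

# Inputs (placeholders): one entry per ground-truth building.
# areas[i]   = footprint area (m^2 or pixels, depending on the x-axis you want)
# matched[i] = True if the building was matched by a prediction at IoU >= 0.5
areas = np.array([18.0, 45.0, 120.0, 300.0, 950.0, 1500.0])
matched = np.array([False, False, True, True, True, True])

# Log-spaced area bins, since footprint areas span several orders of magnitude.
bins = np.logspace(1, 4, num=10)
bin_idx = np.digitize(areas, bins)

recall_per_bin = []
for b in range(len(bins) + 1):
    in_bin = bin_idx == b
    if in_bin.any():
        recall_per_bin.append(matched[in_bin].mean())  # recall = TP / (TP + FN)
    else:
        recall_per_bin.append(np.nan)

print(np.round(recall_per_bin, 2))
```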
SpaceNet launched in August 2016 as an open innovation project offering a repository of freely available satellite imagery. Before SpaceNet, computer vision researchers had minimal options to obtain free, precision-labeled, and high-resolution satellite imagery. Today, SpaceNet hosts datasets developed by its own team, along with data sets from projects like IARPA's Functional Map of the World (fMoW). New imagery and features are added quarterly.