That means you have access to an incredible number of 3D models that can be loaded and referenced in ATON scenes. You can also mix them with local content or other open formats supported by the framework.
Animations are fully preserved, as well as materials, so the 3D model reacts consistently to different light conditions thanks to the PBR pipeline.
Such integration is possible because the framework embraces 3D standards like glTF: let’s talk about real interoperability!
When loading a Sketchfab asset in ATON – if not already set – you will be asked (once) for an API token, which you can find in your API settings:
There is amazing news regarding the ATON framework and large/massive 3D datasets.
The first is that ATON supports multi-resolution through an OGC standard: Cesium 3D Tiles. This has been under the hood for the last few years: check out the reference paper explaining this choice in more detail.
You can now load one or multiple tilesets in a 3D scene, without any restriction in terms of geometry or texture complexity. This is possible thanks to the integration of an open-source library developed under the NASA AMMOS project. Here is a sample multi-resolution dataset in an ATON scene:
The second is that this standard allows amazing integrations with other pipelines, services and tools. For instance, it is possible to load 3D tilesets hosted on Cesium ION by providing a token. Here is a sample of the entire city of Boston (a multi-resolution dataset), streamed from Cesium ION and explored through Hathor (the official ATON front-end) right in your browser, on any device:
This enables ATON 3D scenes to combine geospatial datasets available on the platform with standard glTF models or panoramic content from collections. More generally, integrations are possible with any service that provides datasets in this standard.
ATON has offered immersive VR presentation since 2016 (cardboards, 3-DoF and 6-DoF HMDs). The combination of WebXR + multi-resolution is fully supported: that means ATON allows users to explore large, massive 3D datasets using HMDs (e.g. an Oculus Quest) or cardboards without any installation (a dedicated paper on this is coming soon). Furthermore, it is possible to query all geometry, annotate, measure… at runtime! Check out a few examples in this video:
The new version of the framework, ATON 3.0, includes a completely renewed “VRoadcast” component that allows multiple remote users to collaborate in real-time inside the same online 3D scene (no installation required).
The development of VRoadcast started 3 years ago, initially to experiment with collaborative features (basic chat messages) and then to visualize other users as basic avatars in the 3D space. In this demo (2018), a quick communication test using VRoadcast was carried out involving different web browsers and… a running instance of Unreal Engine 4 with a centralized chat panel:
VRoadcast already offered incredible opportunities in the previous ATON 2.0, enabling remote users to interact in the same scene and even perform collaborative tasks right inside a standard browser. For instance, it was employed to study attentional synchrony in online, collaborative immersive VR environments (Chapter 6 in “Digital & Documentation, volume 2” – an open-access book). VRoadcast was also used for virtual classrooms during the pandemic, allowing remote students to collaboratively populate sample 3D scenes in online sessions:
The new VRoadcast in ATON 3.0 offers improved performance, scalability, out-of-the-box collaborative features targeting CH, and a simple API to create custom replicable events within web-apps (mobile, desktop and immersive VR/AR). It is now fully integrated in the official ATON front-end, so remote users and the general public can already access it. More details soon in upcoming demos and papers.
These features were also employed during the ArcheoFOSS workshop on ATON 3.0 to collaboratively populate a sample 3D forest and annotate a few 3D models together. Check out what ArcheoFOSS participants annotated together on this sample Venus statue!
If you are a developer, here’s a quick example showing how to create a completely custom event using the new API. In this case we fire a network event called “chatmessage” containing the data “hi” to other users in the same scene:
ATON.VRoadcast.fireEvent("chatmessage", "hi");
In order to handle such events, we simply subscribe to the event using:
ATON.VRoadcast.on("chatmessage", (m) => {
    alert("Received message: " + m);
});
In this example, when one user fires the event, the other remote users will handle it by showing an alert with the received “hi”. Notice the broadcast data can be an arbitrary object. Have a look also at the ATON 3.0 examples on GitHub, and start creating your own collaborative web-app.
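Since the payload can be any serializable object, a richer chat message could bundle, for instance, a username and text. The sketch below illustrates the idea; the `handleChatMessage` helper and the payload field names are illustrative assumptions, not part of the ATON API (the actual VRoadcast calls are shown commented, following the pattern above):

```javascript
// Illustrative helper: format an incoming chat payload.
// The "user" and "text" field names are assumptions for this sketch.
function handleChatMessage(m) {
    return m.user + ": " + m.text;
}

// Hedged usage with the VRoadcast API shown above:
// ATON.VRoadcast.on("chatmessage", (m) => alert(handleChatMessage(m)));
// ATON.VRoadcast.fireEvent("chatmessage", { user: "alice", text: "hi" });
```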
Over the last few months, several updates were rolled out for Aton – right before the summer vacation. The WebVR one allows users to toggle immersive VR exploration of 3D models and archaeological areas through HMDs like the Oculus Rift, HTC Vive and many others in WebVR-enabled browsers.
To get started with WebVR, you can download recent Chromium builds here and start exploring published 3D scenes through the Aton Front-End. Among the goals of this feature is realizing the so-called “WebVR responsive” concept, where the service adapts to your viewing environment using fluid layouts, similar to Aton’s responsiveness on mobile devices (HTML5 UI, multi-touch controls, simplified shaders, etc.). Next steps will focus on proper traveling handling to offer a smooth HMD experience.
The WebVR update for Aton is also coupled with another upcoming update for the desktop application (ovrWalker), targeting performance for complex, multi-resolution 3D scenes and 3DUI research applied to CH – watch the video here. VR features in Aton will be further enriched, along with massive upcoming updates related to node loading – so stay tuned!
This is an ongoing experiment aiming to create a fictional 3D map of the famous Game of Thrones TV show. Landscape Services were used to generate a multi-resolution base terrain combined with multi-layer imagery to exploit Aton’s simplified PBR rendering.
Click on the following image to launch the interactive exploration (WARNING: 3D map contains spoilers of previous seasons).
Starting from a base imagery map, a DEM was created from scratch (isolating water, rivers, hills, etc.): in this case no geo-referencing (through world files) is involved, since world scale and extension are… quite fictional and pretty vague. Separate scene-graphs were built to handle simplified trees (mimicking those shown in the opening credits), taking advantage of instancing capabilities, as well as a water level with its own multi-texturing – all composed into a global scene. This experiment also introduces the new annotation features offered by Aton, to map major locations and/or events (for now). The scene is still evolving: the digital elevation model needs some fine-tuning, and new annotations and 3D models of major cities/places will eventually be added to enrich the overall composition.
Another major update for Aton is about to be deployed. A lot of work has been carried out to provide a modern, efficient, real-time PBR model. Much of the inspiration comes from Unreal Engine 4 (UE4 for short) and its advanced PBR system. The WebGL world of course faces several limitations that need to be addressed, sometimes in “smart” ways or using approximation techniques (special PRT and SH solutions and much more) to reduce GPU workload.
Check out this demo, or this one.
The new, upcoming PBR system, combined with the RGBE model for Aton, now supports:
Base map (diffuse or albedo)
Ambient occlusion map
Normal map
Roughness map
Metallic map
Emissive map
Fresnel map
The new PBR maps workflow also aims to be as close as possible to a workflow involving UE4 (or other modern real-time PBR engines), to fully reuse such maps (e.g. the “Roughness” and “Metallic” pins in a UE4 material blueprint). Nevertheless, the new model is also compatible with “basic” workflows, such as classic diffuse-only 3D modeling (or diffuse + separate AO, etc.). The screenshots below show a sample workflow in UE4 using the same identical PBR maps applied to cube datasets:
Of course these improvements will also extend to multi-resolution datasets (e.g. ARIADNE Landscape Services) to produce aesthetically pleasing 3D landscapes by providing additional maps in the input section, while maintaining the efficiency of the underlying paged multi-resolution. Furthermore, the Aton PBR system is also VR-ready, providing a realistic and consistent rendering of layered materials on HMDs as well, in WebVR-enabled browsers.
To import and ingest 3D assets generated using common 3D formats (.obj, .3ds, ….and much more) into the Aton system and its PBR pipeline, the Atonizer service will soon be available.
A new massive update for WebGL Aton FrontEnd has been deployed.
The real-time lighting component – including GLSL vertex and fragment shaders – has been completely rewritten to feature a full light probing system, with advanced per-fragment effects.
The FrontEnd is now able to manage ambient occlusion maps, normal maps, specular maps, fog and other information to correctly render different materials and lighting effects using a modern, per-fragment approach. Note that light probes (LPs) also contain an emission map, to consistently define light sources and their shapes. Consistent rendering of normal maps required special attention regarding efficient computation with GLSL fragment derivatives (this will be explained in a separate post).
You can interactively change the lighting orientation by holding the ALT key and moving the mouse around. Try it live with:
These new features also apply to terrain datasets and, generally speaking, to all multi-resolution 3D assets published through Aton… Have a look at this landscape generated using the ARIADNE Landscape Service. Yes, that means new options will soon be added to the terrain service for processing multiple maps (i.e. specular maps for lakes, rivers, etc.) for 3D terrain datasets published online.
More info and details will soon be published. Stay tuned.
The official page for the WebGL Front-End – codename “Aton” – is now live. You will find a complete description of all its latest capabilities, demos and other showcases.
A live demo of the interactive 3D Massenzio Mausoleum has been published, along with metadata presentation – including a brief description, a reference paper, a few POVs and a YouTube video.
A massive update for the WebGL Front-End has been deployed. Here’s a list of improvements and new features:
Improved spherical panoramas handling and support
Added a double-click hotspot feature: users can now double-click any point of the 3D model to smoothly focus the camera on the target. On mobile tablets and smartphones the same feature is bound to double-tap – try it live! This feature is very useful on large-scale datasets to adaptively scale and focus targets or specific items
Improved POV (Point-of-View) dialog: the button now shows a drop-down overlay with both a permalink (3D scene + POV) and the bbCode – useful for text descriptions. During the exploration of the 3D scene the user can simply press the POV icon (or press ‘c’) to show the drop-down dialog with the current POV information and links ready for online sharing
New embed dialog (similar to POV dialog): it provides HTML code for use in your external web pages, enriching content with an interactive canvas
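As a rough sketch of what the embed dialog produces, the snippet below builds an iframe string for an external page. The function name and the exact attributes are illustrative assumptions – in practice, use the HTML code the dialog itself provides:

```javascript
// Build an iframe embed string for a published 3D scene.
// Function name and attribute set are illustrative, not the dialog's exact output.
function buildEmbedCode(viewerURL, width, height) {
    return '<iframe src="' + viewerURL + '" width="' + width +
           '" height="' + height + '" frameborder="0"></iframe>';
}
```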
This is a slightly more technical post about customizing the main PHP component, responsible for the main HTML5/JS Front-End presentation and deployment on multiple devices. Here is a brief list of features and capabilities offered so far as GET arguments, useful for permalinks and/or embedding in external pages.
“ml”: defines nodes (3D assets) to be loaded from the online collection. For instance:
ml=ref.osgjs
will load the single node “ref.osgjs” (basic coordinate system) – see a live demo here
ml=mytest/mymodel.osgjs
will load the node “mymodel.osgjs” inside the subfolder “mytest”
Multiple nodes can be grouped with “;” – Examples:
ml=mytest/modelA.osgjs;mytest/modelB.osgjs
will load both modelA and modelB
ml=mytest/
is a shorthand to load an entire folder: it will load all nodes contained in the “mytest” sub-folder. Note the trailing “/”
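To assemble such permalinks programmatically, node paths are simply joined with “;” as described above. A minimal sketch (the helper name `buildViewerURL` is illustrative, not part of the Front-End):

```javascript
// Join node paths with ";" to build the "ml" GET argument (illustrative helper)
function buildViewerURL(base, nodes) {
    return base + "?ml=" + nodes.join(";");
}
```

For instance, `buildViewerURL("viewer.php", ["mytest/modelA.osgjs", "mytest/modelB.osgjs"])` yields the same permalink shown in the examples above.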
“pano”: loads a panoramic (equirectangular) image and attaches it to on-the-fly spherical geometry. Examples:
pano=mypano.jpg
will load the equirectangular image “mypano.jpg” and attach it to a special sphere geometry – live demo
Notice you can have three different scenarios for interactive exploration:
3D models only (“ml” provided but no “pano” option provided)
Spherical panorama only (only “pano” argument provided)
Both 3D models + spherical panorama (“pano” and “ml” both provided)
“alpha”: enables a transparent background. This is useful when embedding the component in external web pages for styling purposes. Here is a demo with “ref.osgjs”: viewer.php?ml=ref.osgjs&alpha – Notice how the background is now white. Try embedding it in another web page with its own background to see the blending.
“pov”: tells the Front-End to load a starting POV (Point-of-View) using a smooth camera transition. The POV string is composed of 6 values encapsulating the eye (x, y, z) and target (tx, ty, tz) positions. Examples:
pov=-1.674 -2.996 1.159 -1.733 2.900 0.121
will first load assets (a panoramic image and/or 3D models) and then load the POV – live demo
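Parsing the 6-value string back into eye and target positions is straightforward; here is a minimal sketch (the helper name `parsePOV` is illustrative, not part of the Front-End):

```javascript
// Split the POV string into eye (x, y, z) and target (tx, ty, tz) positions
function parsePOV(povString) {
    const v = povString.trim().split(/\s+/).map(Number);
    return { eye: v.slice(0, 3), target: v.slice(3, 6) };
}
```

Applied to the sample above, it returns the eye `[-1.674, -2.996, 1.159]` and the target `[-1.733, 2.900, 0.121]`.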
The Front-End of course offers shortcuts to retrieve the current POV, thus allowing users to share the current 3D scene with a specific camera position