VRML Architecture Group

Meeting 21 - 23 August 1995
Half Moon Bay, California

NOTE: These are UNEXPURGATED notes.
They give an indication of the features in the VRML 1.x/2.0 draft (which will be released on 15 September), but please don't take them for Gospel.


Session One - Monday 21 August 1995

Opening 8:30 AM

Mitra talking about the press release and working for support.
The sense of the group is that it's a little too early for this 
announcement.
Jan is a little late.

Morning things:

Hi/hello/introduction from the various participants:
What are we here for?  Agreed on goals?
Agreeing on an agenda.

We decide how much power a review board has?

Consortium: (Rikk sez)

Money? - donated
Organizational crap. - useless
Technical review. - important

How does the technical review process work (we keep on coming back to 
that)

Introductions:

Gavin - VRML is cool - focused, clear goals, make VRML 1 really work, 
for everybody everywhere.  Specify some design principles and 
testing; try to grow an ecology of solutions.  Review process is a 
great concept

Tom - Educational representative, Brown University.  
Behavior languages, real-time systems, MUDs.  
Basic ideas for architecture for 2.0 done this meeting.  
Wait too long and there will be about 15 different proposals.

Tony - Involved in VRML for a while; agree with Gavin; 
didn't expect the hype, the expectation bar is high, 
want to be able to meet it - a conservative approach - 
get it all done by the end of this year.  Some idea on how to proceed 
on 2.0.  We have time to think things out.

Bill - Work on audio extensions to VRML; representing perceptual 
psychology.  The human factors side of design, not enough attention is 
paid to these issues.  Systems that make sense for engineering, but 
not for humans.  Humans are programming this stuff.  VRML is about 
communication.  Done a lot of work in spec'ing out APIs for 3D audio.  
Did the Creative API for 3D sound.

Jan - Wants to add 3D graphics to things like the X-Window system; 
read spec last November - proposed a SIGGRAPH course - 
keeping up with the homework.

Rikk - At SGI, started on VRML in 1980; this is phase one 
of the master plan.  Create a completely different user interface 
(NFI)
for devices.  The graphics is the easy part.  This group should be a 
technically focused group; politics will kill us - we have to be true 
to the technology.  Get into politics we'll be dead.  It will create 
a conflict of interest.  We want to talk about requirements, 
not design.  We'd need to meet more often if we're going to be a 
technical design team.  We need to work on requirements here.  
Spend time thrashing on design.  Andy Van Dam - entering the "winter" 
of VRML; we have captured the world with VRML 1.0; there are technical 
issues, but the implementations aren't working.  We have to make 
VRML 1.0 work!  We have suggested what's wrong and what we should be 
focused on.  There is skepticism about if there is another toy.  
The user experience and the content should be the sole important 
feature of the experience.  The second great sin of CG is losing 
response time.  We are talking about real-time interactive 
experience.  I don't think we'll design 2.0 in the next three days 
or the next three months.  It's easy to get something working in 
our own lab; it's much more difficult to get things working for the 
masses.

Jon - Representing Microsoft; echo Rikk's sentiment about Politics.  
Edge of a great opportunity.  Great chasm, as well, in that we're 
losing response time.  Content quality is not good enough.  
Here to guard consumer-friendly implementations.  Make sure that 
millions of people are going to love it.  Resolving the things that 
are wrong, getting it done soon - full work by year end, with a few 
extensions.  A few practical things that everyone has talked about.  
We can define the scope of what we want for it.

Mitra - Online systems for 12 years; focus on how do you bring 
non-technical people into online systems?  Interested in building VRML 
worlds; 1.0 and 1.1 are not that interesting.  How to add behavior and 
multi-user stuff; the goal of this meeting is to come out with a 1.1 
specification.  Crucial to develop a 2.0 architecture; Worlds and 
others will be building behaviors into VRML almost immediately.  
Worried about fragmentation in a market that is still forming; 
we must keep the standards process up to what the customers want.   
In the Internet standards process for a long time.

Brian - Trying to provide graphics APIs for lots of people is 
difficult; focus on making VRML easy to use and complete.  
If not easy to use, people will ignore it; easy for non-CG types.  
Get away from this programming-like architecture.  Need an automatic 
VRML generation process for things to really explode.  Make sure 
Autodesk is ready for the future, but actually here for the future 
of the community.

Goals:

What is VRML?  
Is 2.0 more than just a file spec?  
Networking that hasn't been invented yet.  
There are plenty of examples of these things.  
What is it for?  A way to make the Internet kinder and gentler.  
Does it need to be fully multiuser and fully interactive?  
What are we trying to do with this thing?  
A system for building 3D worlds which are interoperable.  
What is HTML?  
How do we define what the 3D goals are of what we're trying to do?  
If you define those 3D goals, you're defining the interoperability 
standards.

Taking a look at Rikk & Gavin's docs on the vision and mission 
statement, we have to keep everything focused.  Agreements on 
principles.

Vision, then mission, but first, process

How should we handle resolving issues?

Consensus (last resort voting)  - does this work?
Mitra says that it seems to be working at IETF.
Brian says that the decisions have to be drawn up
We'll draw up a list of recommendations with our signatures upon 
them to mark our approval.
Consensus on consensus plus voting.

Name that group?
Tony suggests "VRML Architecture Group" - VAG or VRMLAG

gavin@engr.sgi.com
twm@cs.brown.edu
dagobert@netcom.com
wlm@netcom.com
jch@oki.com
rikk@sgi.com
jonmarb@microsoft.com
blau@bluerock.com
mitra@words.net

Is this the right group?
We need serious input from the design tools community; 
we need to make sure those voices are here.
We do think that we go forth to expand into sub-cabals...
Do we want to vote to add new members to the committee?
We need to add people at a later time, perhaps?

VISION

We have to be inclusive (Mitra); he's concerned with the topology 
between worlds.  There might be no continuous space?

A vision statement - drawn up by Mark - the quality of the experience 
is closest to the Black Sun.

Address the builders of these worlds - and the users of them (Brian).  
Should be in the vision statement?
Regular folks can build worlds of some kind; professionals can use 
better tools.  

Ideas tossed out:

3D Interface to the net
3D Interface to the desktop
System for virtual worlds
Single system for planetary management
Black Sun.
Information visualization with phase space things.
Sensualized interface to perceptualized space.
Collaborative creation.
Enabling technology for people who can't use computers now. - 
engaging spatial capabilities.
The Intergalactic Shopping Mall

Spec is a spec, programming manual is a programming manual.
A vision statement (Brian) is just a paragraph, etc.

GOAL:

How we want to accomplish things:

1.0 Clarification document.
Identify 1.1 consensus areas versus grey areas
Define a framework for 2.0 - an abstract (just do something) 
Publish our vision, mission and goals...

Brian will publish to the relevant USENET news groups
Should we have a joint statement about what the VAG is and how this 
meeting went.  (Executive summary.)
All comments/notes for review have to be out Thursday night; 
back again on Friday night.

Tight 1.1 / Loose 2.0?  
September 15th is the first draft
Comment period until Friday October 13th 1995
15 - 17 October is 2nd VAG meeting
November 1st second draft
Close of comment Monday 13 November 1995
Friday December 1st final draft.

Create an email group with everyone.
Put these dates at the top of the comments

Working on an architecture which subsumes the VRML format. 
Authors should be able to create their own rules in their 
environments.
Building a physics system that interoperates with multiple browsers.

We're all going onto the VRML products list.

TERMS: (precisely as worded)

Browser is a generic term for the application running on the user's 
machine that is outside of the control of the author of the world.  

An interactive environment responds to the user within 
human expectation.

Multiuser - ???

Behavior - ???

MISSION STATEMENT:

(precisely as worded)

The VRML Architecture Group mission is to establish VRML as the 
world's most reliable, useful, and widely used open specification 
for interactive 3D on the Internet and the World Wide Web, 
and to chart a course for the future versions of VRML.

Josie Warneke's name is raised by Brian as someone who will be 
turning these docs into a human-readable format.

AGENDA:

Monday 21 August 1995 Morning 8:30 AM -> 12:30 PM

Introductions
Parliamentary Procedures
Selection of a Name
Visions
Mission Statement
Goals
Establishing an Agenda (Mitra)


Monday 21 August 1995 Afternoon 1:30 -> 6:30 PM

Discussion of 1.0 unresolved issues ( 1 hour )

Discussion of 2.0 issues ( 1.5 hours )

Discussion of 1.1 issues ( 1.5 hours )

6:00 PM Plan Tuesday's agenda


Tuesday Morning 8:30 AM -> 12:30 PM

Tuesday Afternoon 1:30 -> 6:30 PM

Wednesday Morning 8:30 AM -> 12:00 PM

Draft joint statement of VAG meeting.


FREESTYLE in the morning section:

Rikk wants to talk about principles.  User Experience, Performance, 
Scalability, Simplicity, Editability.  These are important to him. 
The user's experience, versus the author's control.

Mitra wants to ensure that authors can do what they want and be as 
expressive as they want.  The end result has to appeal to both world 
creators and world users.  We've decided to add Diversity of 
application as the last bullet item on the list.

Diversity of applications as supported within the context of the 
language as defined.

Extensibility - so that VRML can continue to serve the needs 
of a growing community.

Now onto Rikk's "Measuring Success" list

Added a 10th metric, same as number 3, but VRML authoring tool...
Changed the 8th metric to a "vocabulary for VRML"
9th Metric changed to "a viable and thriving VRML industry"
11th Metric is "VRML is the market leader in the areas outlined in the 
goal statement"

Session two - Monday afternoon VRML 1.0

Going through Bernie's paper, extracting 1.0 items.

Image types

Material mapping

Cycling in MaterialBinding

Per-vertex material binding.

LOD ambiguity

Matrix Transform

Transparency in textures

Extensibility

Cameras

Language Basics

MFFloat and MFString

Lighting separation

Lighting models

LightModel Node

From Brian's paper "VRML1.X and beyond - Proposal Mania"

Brian will provide text of ones approved without comment - too fast to take notes.


BEHAVIORS and 2.0

3 important behavioral events

Picking/Selection, Regional/Trigger, Time-based
Do we want a manual of VRML common practices?

vrml 2.0

java is moving toward an open policy
sun really wants java to happen
showed ??? at s95
they have an application that calls a rendering api 
(Ice, their rendering engine)
the top level is an application, not a browser

behaviors are java apps, which solves the problem of how to 
add behaviors to vrml

what is the next thing after getting 3d on the web
it's DIS without the whole DIS thingy

don't use the word "simulation" because of rollbacks

mitra describes rollbacks

discussion of dis and dead reckoning

2 areas

multi users and behaviors

gavin proposal, spec out the broad areas ...
user (navigation) and performance

start with the user, how do they affect things in the world
start with the interface

navigation can be specified as part of the content

are devices important in the content?  how to control your 
own avatar?

are physical devices important? yes, part of the user issues

user interface
multiple people viewing things changing
user interface widgets

how are these wired together?

behaviors, how does this relate to user interface

time is key

multiuser

here is the list
* widgets
* devices
* behaviors
    simple physics
    global (one world)
    metaphysical
* time
* multiuser

(A whiteboard chart reproduced here)

vrml 1.0 ................... api ................... black sun

sensors             keyframes
connections         multiuser
physics             engines    OLE    JAVA

vrml file format        vrml api        vrml tp

how to define the api, what is beyond the api and how to describe 
things on the left side of the api.

get some very basic behaviors working such as connections and 
sensors, not constraints, make it just complex enough to make the 
worlds complex

sensors are defined by tony, then mitra takes over

2 kinds of sensors, user selects and user navigates
(a region sensor and a pick sensor)
a connect node binds the value of one property to another
a trigger node binds the change in a value to a behavior happening

what are the concerns; what happens when the user changes the 
connections to the triggers

is this degenerating into a language?

time can be used to describe some behaviors that can be done 
easily with keyframes

gavin gives a demo of a rotator node using engines

summary ...

3 issues
picking selection
regional proximity
time based animation (continuous animation)

onto manipulating the scene graph ...
toasters
object models and names

most proposals for names are similar; names allow you to refer 
to an object, but there is no way to specify using one object to 
point to another object

will spend time on naming tomorrow

widgets ... people will want to put dashboards into the worlds
best specified using an api, on the right side of the line

make a change to wwwanchor:
camera position at end of url

TUESDAY Morning 22 August 1995

Sound

Bill - sound is not just a matter of turning on a sound and playing 
it.  Bill's here to educate us about this.

Bill offers a proposal for new sound nodes that requires minimal 
changes to VRML in the 1.1 time frame.

For example, a Receiver node that can be attached to the camera 
to follow the listener whenever he moves.

Rikk suggests adding 'on' field to SpotSound, to make it 
more like SpotLight.  (Or, alternatively drop 'on' from SpotLight).

Discussion of order-N complexity of sound range checking 
(RangeOfEffect).

Do we specify drop-off rates for SpotSound?

Rikk - model for attenuation?  Does the browser choose it 
or the content?  

Bill - we have to assume some sophistication for audio rendering.

Mitra - we should have a guideline for 'dumb' browsers so they can 
handle attenuation.

Rikk - 20% of cpu is crunched with a soundful world

Bill - there's a huge range of capability in the hardware out 
there right now; cheap cards will place heavy demands on the CPU

Suggestion for a description field that can be displayed in the 
browser if it can't play a sound.

new fields suggestion:  name (vs. inline), intensity, description
direction = 0 0 0 and AngleOfEffect = 0 (360?) means a point source
rangeOfEffect = -1 means ambient
suggestion to combine PointSound and SpotSound into 
one node - SoundEmitter


Rikk describes Rooms or Quadrants that separate Lights, Sounds, 
Behaviors etc.  so that their effects can be separated out from the 
rest of the environment (currently a Separator is used to do this 
with lights)

Mitra also discusses segregating the space with textures to 
represent objects outside a door, etc.

The grand new SoundEmitter proposal:

SoundEmitter {
    name              # SFString
    description       # SFString
    intensity         # SFFloat 1.0
    location          # SFVec3f 0 0 0
    direction         # SFVec3f 0 0 0
    maxRangeOfEffect  # INF
    minRangeOfEffect  # INF
    angleOfEffect     # 2PI
}

Point   : 0 minRangeOfEffect, N maxRangeOfEffect
Spot    : 0 minRangeOfEffect, N maxRangeOfEffect, angleOfEffect
Ambient : 0 0 0 location
Gavin and Rikk are against Ambient sounds unless we can scope the sounds
Receiver node - suggestion to move the rangeOfHearing to Environment node, 
or Camera (bad idea, we think) or scrap it completely.  
Suggestion to scrap the receiver node
We decided to make the sound specification an experimental set 
of extensions to the current spec
We still need to deal with sound attributes and playback
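As a sketch of how the combined node might look in a world file (all field values and sound file names here are hypothetical; the node itself is still only a proposal):

```vrml
# Hypothetical point source: audible within 10 meters of its location
SoundEmitter {
    name             "hum.wav"
    description      "machine hum"
    intensity        0.8
    location         0 1 0
    maxRangeOfEffect 10
}

# Hypothetical spot source: a cone of sound aimed down the -Z axis
SoundEmitter {
    name             "announcer.wav"
    description      "PA announcement"
    location         0 3 0
    direction        0 0 -1
    angleOfEffect    0.785    # roughly 45 degrees
    maxRangeOfEffect 30
}
```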
Multimedia discussion
-----------------------------
MediaAttributes {
	looping
	start - offset
	end - offset
	volume 
	speed
	on
}
these attributes apply to any media type that gets loaded within scope
Big discussion about why we want the playback attributes separate 
from the node to playback
Tom - it's easier to keep integrated video/audio synchronized if they are 
sharing a set of attributes; also it's consistent with other Property nodes
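A sketch of how scoped playback attributes might read, assuming MediaAttributes acts like the other Property nodes (the field values, and the PointSound usage, are illustrative only):

```vrml
Separator {
    MediaAttributes {
        looping TRUE    # restart the clip when it ends
        volume  0.5
        speed   1.0
        on      TRUE
    }
    # Applies to any media loaded within this Separator's scope,
    # e.g. a PointSound (exact sound node still under discussion)
    PointSound { name "fountain.wav" }
}
```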
Should Texture2 emit sound?
[lots of syntax argument here]
Requirements:  need to specify audio/video synchronization; 
do we need a general sync mechanism?
Scenarios:  one or more animations synchronized with the same sound source
Suggestion:  have syncWith MFNode pointers to keep a media type in sync with other nodes
Rikk points out that solving synchronization within the world will be hard

[debate about how hard it is to synchronize]

Gavin - can we add SFNode, MFNode to enable experimental multimedia 
extensions - to be able to synchronize audio and video

We achieved consensus that Tom's proposal needs to be built 
as an experimental extension in the 1.1 time frame, 
and that time-based media need to be considered 
within a general time spec

Graphics Session
------------------------

Tony talked about what we want in 1.1, 
as a prelude to further discussions:
   Fix broken things
   Review existing extensions for Inclusions
   Add things we like from OI
   "Simple Reality"
   Enabling syntax for API (1.1 -> 2.0)
       Naming, 
       DEF/USE, 
       URNs
   Internationalization
   Access
   Object Model  

Jon suggested we might be biting off too much with 1.1

Rikk suggested that the goal of 1.1 is to define the minimal number 
of extensions to make VRML minimally useful.

What is the minimum content?
    [further discussion]

We discussed how one would build something like Myst in VRML 1.1, 
as a minimal test.  HTML was boring before there were forms.

Trouble spots:

   Group nodes, leaking properties.  Should we deprecate them?
       These are: Group, Switch, LOD, TransformSeparator

       If we define them to act as separators, little changes, 
       but lights are harder to specify.

       Jon suggested adding a property to a light that selectively 
       gets rid of scoping.

       We may need ways for objects to refer to other objects' 
       coordinate systems.

       Getting rid of this may break things 
       (but we've already deprecated LOD group-behaviors)

       LOD, Switch can become separators

       Keep supporting Group, but deprecate it (Tony)

       We should add stuff into qvlib to map from 1.0->1.1; 
       this is easy if we map from Group to Separator (Gavin)

       Mitra discussed various approaches to supporting multiple 
       versions of code

       Mitra suggested an example of a 1.1 world that inlines 1.0 worlds

       Converters are hard to implement, especially with behaviors 
       (Gavin)

       This should be deprecated in 1.1, will be removed in 2.0
          Only kill features in major releases (Rikk)

       Authoring systems must pay attention to deprecated things

       Decided: LOD, Switch will become separators in 1.1
          Group, TransformSeparator will be deprecated
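The Group-to-Separator mapping Gavin mentions is mechanical; a 1.0 file using Group could be rewritten for 1.1 roughly like this (the Material and Cube are placeholder content):

```vrml
# VRML 1.0 - Group lets properties leak out to later siblings
Group {
    Material { diffuseColor 1 0 0 }
    Cube { }
}

# VRML 1.1 - Separator scopes the Material to its own children
Separator {
    Material { diffuseColor 1 0 0 }
    Cube { }
}
```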

       Should we add renderculling into Switch and LOD?

       Can no longer switch materials, except by duplicating 
       pieces of geometry.

           Use DEF/USE to fix this, duplicating geometries.

           Mitra has problems with this, since he wants to be able to 
           turn several properties on/off separately.

           People argue it's easier for the viewer, more difficult for 
           the author

           Mitra thinks that it's not too hard to implement switch 
           nodes that are flagged in the browser.

           He also wants a more elegant specification

           Tom suggested unrolling the state, but we decided just to 
           drop the idea of Group Switch statements
              We should review this with the browser group.

Decided: We should change the defaults for shapehints to 
counterclockwise solid to make rendering faster.

Brian's issues:
    add nearDistance, farDistance fields, 
     decided:    defaults are 0.01, 5

    Decided: We need to document how the camera model works better.

    Should we add aspect ratio to the screen?    Decided: leave it 
out.
    orthographic should keep it in, it's completely specified

    Rendering hints:  multimode rendering with wireframe/etc
          Use an indexed line set.
          We shouldn't be specifying rendering models in general, 
          there are better ways to deal (Rikk)

    navigation:  how do we specify what types of interaction are 
possible.
        Lots of different ways to look at this
        Most people aren't implementing the examiner viewer.
       Rikk wants to see much more info on how to deal with 
navigation, 
       more general methods
       Brian's proposals may be minimal.
       Probably want following paths, more complex interfaces in 2.0
       Go through the properties...  Useful?
       We may want to have some of the navigation modes 
         bound to cameras (attribute) 
          constraintsTranslation   -- constrain Y translation
          constraintsRotation  -- need to turn roll (Z) off, tilt off
       Jan wants a more general model.
       Gavin thinks this is a UI design problem, not our problem.
       Is it a browser/author issue?
      Creators want to be able to constrain lots of things.
      Worlds bounding box?  Can you get lost?  Path walking down?
         Specify up/down using bounding box?
         Brian doesn't want to go any farther.
         This is coarser-grained than collision detection
         Jon suggests it's not a client issue, it's a creator issue
         Why not just codify the enums for WALK/FLY/etc
       No consensus, except on the navigation enums.
       speed should be a velocity based on the focal distance, or 
       just specify a speed in m/s
       rough consensus that the default camera starting speed be in m/s
     We should call the node NavigationHints
           (translationSpeed is in m/s)
     NavigationHints {
        translationSpeed    IMPLIED    # SFFloat (-1 implies that 
                                       # there is no default)
        navigationMode      DEFAULT|WALK|FLY|EXAMINE  # SFEnum
      }
      Get rid of everything else in the proposal
       Mitra suggests that a default Up is bad since we can have 
       models that are difficult to WWWInline.
       Add documentation to the spec on how to rotate weird worlds.
       To think about:   constrained manipulation
                                   interaction speeds
       Table issues of constrained paths, until we discuss paths.
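The part that did reach rough consensus could be exercised like this (the field values are illustrative; only translationSpeed and navigationMode survived the discussion):

```vrml
NavigationHints {
    translationSpeed 2.0     # meters per second; -1 = no default
    navigationMode   WALK    # one of DEFAULT|WALK|FLY|EXAMINE
}
```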

  Add light attenuation to VRML?
      CDK has it
      this isn't really a rendering specification, does it improve the 
user experience?
      people aren't complaining (Rikk)
      Leave it out
   Ambient light?  is it part of the environment?
      Jan thinks it's really important.  Let's just stick it into the 
environment node

   Add additional geometry?
      Rikk thinks that cylinders are irrelevant
      Brian wants to generalize cylinders slightly.
      We shouldn't have a tessellation property
      Gavin thinks that if we add geometry, we should add sets.
         there aren't enough cylinders in the world.
         maybe have a surface of revolution, or extrusion.
      Jan likes  adding a bit to cylinders
      are there lots of NURBS, etc?  
      Rikk doesn't want to make it easier for modeling, just for 
rendering.
      Brian -- what is the minimal set of things to make complex 
shapes?
      Mitra suggests that dealing with additional primitives makes 
editing hard.
      Mark suggests that many people will be modeling with basic 
primitives.
         How much do you build in / provide in libraries?
      Rikk -- people doing real-time graphics only support triangles
      Jan -- should we just add apex radius to cylinder?  
      We should be really careful not to bloat the browser.  (Rikk)
      If we open the door to adding a new field to the existing 
      cylinder, then we also open the door on triangle strips and 
      elevation meshes.
      elevation meshes, e.g., can load three times faster than the 
      corresponding IndexedFaceSet.

Tuesday Afternoon 22 August 1995

More debate on shapes:

    TriangleStripSet
        Gavin sez TriStripSet can be generated from IndexedFaceSet
        Compression is not significant

    Elevation Grid
        Jan's Draft:

        ElevationGrid {
            nx       SFLong     default = 0
            ny       SFLong     default = 0
            step     SFCoord2   default = 1,1
            heights  MFFloat    default = []
        }

    Mitra suggested that maybe we need to add AlternateRep (see 
    Inventor) to allow experimentation with new geometry types.
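An instance of the draft node might look like this (the heights are made up); the appeal is that nx*ny heights on a regular step grid are far more compact than the equivalent IndexedFaceSet:

```vrml
# A 3x3 grid of height samples, 1 meter between samples
ElevationGrid {
    nx      3
    ny      3
    step    1 1
    heights [ 0.0 0.2 0.0
              0.2 0.5 0.2
              0.0 0.2 0.0 ]
}
```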

New Nodes

    Environment node?
        Lots of discussion on whether FOG should stay.
        General consensus to keep Environment as in OI

    Background color?  Gavin: how about a World node that 
    contains GLOBAL info describing the entire world.
        Only one of these makes sense per world....?...(some debate)

    Rikk suggested developing a new node: Sky.
    This node provides a simple model for describing sky and 
    ground plane.  e.g.

    Sky {
        groundColor     rgb
        bottomSkyColor  rgb
        topSkyColor     rgb
        skyTexture      filename
    }

    No consensus on how to do sky or bkg texture.

DECISION:  Document Info node  DEF BackgroundColor Info { string "r g b" }.
           Note that background texture is coming in future.
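So a world wanting, say, a dark blue background would carry something like this under that decision (the color value is hypothetical):

```vrml
DEF BackgroundColor Info { string "0.0 0.0 0.3" }
```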

ACTION: Mitra, Rikk, and Gavin will research and try to propose a Sky 
or Background node.

DECISION: No LightModel - use emissiveColor in Material as the 
mechanism.
DECISION: Do not add BaseColor node - Material/diffuseColor covers it.
DECISION: add nearDistance and farDistance to Cameras
DECISION: do not add a transparency field to Texture2 - 
use alpha channel in JPEG, et al
DECISION: adding POINTORIENTATION to the map field of WWWAnchor is 
a small part of what may really be needed - 
      a more general mechanism is preferred.
DECISION: do not insist on -1 as last value in IndexedFaceSet faces 
(as per Bernie's list)
DECISION: do not require [ ] for all MF fields (as per Bernie's list)
DECISION: do not delimit ALL field values (as per Bernie's list)
DECISION: Document standard keyword naming practice:

    fieldNameSyntax
    ClassNameSyntax

DECISION: Propose/Adopt OpenGL extension naming methodology for 
custom extensions - Mitra

    e.g.  BackgroundColorEXT  - common to more than 1 vendor
          BackgroundColorFOOX - one vendor's suggested extension
          [where FOO = company name/acronym]

DECISION: no delimiting of children
DECISION: node names with "" will not be supported.

PROPOSAL:  Rikk and Gavin proposed:

    1. Restricted property inheritance:

    Separator {
        renderCulling AUTO
        properties [
            DEF mtl_name Material { ... }
            Transform { ... }
            ...
        ]
        children ( Separator or Shapes )
    }

    2. VertexProperty node - can be done as a cache in the 
    implementation

DECISION:  Gavin, Rikk, and Tony will review and propose.

DECISION: bump the header to 1.1
DECISION: Lights/cameras will remain as properties  (Bernie's list)

PROPOSAL: Gavin addresses i18n issues
    - alternate header ==>  #VRML 1.1 UTF-8
    - 2 goals with i18n:
        1. support i18n content
        2. make content multi-lingual
    - Tom Meyer - create a LanguageSwitch node
    - GOAL: support multi-lang content 
      (1 file that supports many langs) WOULD BE NICE!
    - GOAL: support non-English lang content 
      (1 file supports one lang)  MINIMUM!

    ==> Jan will research and come back with a 
    proposal/recommendation

ISSUE: Fonts - no change.

============== NEW STUFF PROPOSALS for 1.1 =================

PROPOSAL: Brian Blau animation

    QUESTION: is there a 1:1 mapping from axis/angle -> quaternion?  
    Gavin!

    Morphing between objects;
    Gavin: separate features - transformation animation is most 
    important

    DECISION: Brian and Gavin will breakout and come back w/ proposal

PROPOSAL:  Mitra "VRML Behaviors - an API":  See paper describing 
Mitra's proposal.
	
    ISSUE: Do we bind the PICK, PROXIMITY-TEST, and TIME sensing?:

        1. Add this behavior to Separator - default is no behavior

        Separator {
            pickEvent         --->
            proximityEvent    --->
            timeEvent         --->
            ...
        }

        2. Create new nodes PickSensor and RegionSensor (and 
        TimeSensor)
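Under option 2, authors would drop sensor nodes into the scene graph; a sketch, with all field syntax still up for debate:

```vrml
Separator {
    # Fires when the user picks the cube (hypothetical syntax)
    DEF doorPick PickSensor { }
    Cube { }
}

Separator {
    # Fires when the user's viewpoint enters this region
    # (hypothetical syntax)
    DEF roomRegion RegionSensor { }
}
```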

Tuesday Evening 22 August 1995 in Mark's Room

URNs 

3 levels to the URN implementation

    Make objects independent of the storage location
    Nice to refer to objects by name - "Coke can", etc.
    "Blue Ball" would be Blueball.gif

Level 1:

    Extend the name field of WWWInline to be MFString

    WWWInline {
        name  "URN:foo.com/coke can"                    # URN
              "http://thingy.org/fallbackcokecan.wrl"   # URL fallback
              "http://genericobject.wrl"                # Generic 
                                                        # representation
    }

    Must define a strategy for fallback in the local cache.

Level 2:

    Throw the URN at an HTTP server (/cgi-bin)
    When it can, it will reply

Level 3:

    Plug into the global URN infrastructure of the Internet.
    Just a change to the browser software

Rikk doesn't know what URNs are good for.  We try to explain.
"Either you guys have lots of vision - or you're wrong!"

Gavin wants us to keep the language about Generic objects in the 
specification.

Mitra & Mark will draft a document on how to create the cache 
reference files.

===== The Behavior model that happened at Dinner =====

Specifiying the interface to a scene graph - which fields are mutable.
Implementations that don't want to maintain the scene graph know what 
parts to maintain.
Separator node which maintains its interfaces; list of fields and name 
to map the fields to.

Tony thinks that people will be using this construct a lot in heavily 
interactive applications.
We have to worry about the interface.

The interface has a definition name.
Mitra, Tom, Gavin and Tony will be drawing up an interface 
specification for us all.
We need to make sure that the prototyping issues are resolved by this.

====== Back to Behaviors?  Postponed to tomorrow morning =========

======== Naming =======

The toaster color is one of our fields.  
It is connected to a field in an object.  toaster.color is a field.
From outside the world it looks like "toaster.toastercolor"
Build up layers from objects that have interfaces.
How do we open the window in the kitchen?
Access kitchen.toaster.popup
We will want private and public interfaces? asks Mark.
Don't publish the location of the private interfaces.

Tony feels like we're on the top of a slippery slope.
He thinks that Tom has solved a lot of these problems.  Access 
methodology.
For complex behaviors we can plug in APIs.

What about non-unique names?
We have to make sure that we scope names by interface.
If we use WWWInline then we use the name from within the 
WWWInline to prepend to all objects within that file.
We should deprecate the multiple use of names because 
it will hurt behaviors.  DON'T DO IT.
Rikk wants to make sure that we do leave it in 
because someone may use it intelligently.

=====  Classes, Prototypes, DEF/USE, and COPY =====

COPY
DEF/USE won't work if you start manipulating the fields within it.
COPY would be a "deep copy".
You have to give the object a new name.  
DEF foo COPY car creates an object named foo which is a copy of car.


Classes/Prototypes
The interface thing defines prototypes.
This may be ok; we'll reference the interface spec above to find out for sure.

"That Mitra - he sure does have a lot of ideas."

=============  FORMS ===========
Combination of button and text fields.
Mark says that text entry needs to have useful UI widgets.

==== CLIENT PULL/SERVER PUSH ====

All agreed it's a good idea.  Tom and Mark think it's quite wonderful.
What happens when you get another node with the same name?
It's a useful behavior.  Action for Tom & Mark.


=====  VARIABLES and CONSTANTS =====

Define a node which has a single field for each of our variable types.
This can be done other ways right now.
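One possible spelling of a variable node, with a single-field node 
type per basic field type (node and field names invented for 
illustration):

DEF doorCount LongVariable {   # hypothetical node type
  value 0                      # its single SFLong field
}

As noted, the same effect can already be had by parking a value in a 
field of some existing node.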

===== Agenda for Wednesday ======

Connect and Trigger nodes and Variables
API (2.0)
Camera Paths
Streaming/Multiuser/SRDP

Collision Detection

All issues must be resolved by lunchtime (11:30)

Go through action lists
Revisit the schedule
Draft a statement.
Alert the media.
Group Photo.

Wednesday Morning 23 August 1995

============= Connect and Trigger nodes and Variables============= 

How do we detect errors in VRML files when there are so many 
possible "undefined" results?

We have to define the order in which the triggers will happen, 
otherwise the browser will have to do a lot of checking for conflicts.

Tom: Any connection between a sensor and a trigger must be regarded 
as an atomic event; this would allow multi-threaded execution as well.  
In this way, we can guarantee that only one trigger will happen 
at a given time.

Mitra: Sensors sending on edges, rather than continuously, 
is what can cause these conflicts.  You can fan the
connections using an AND or an OR, 
but you can't simply run the wires together.

Gavin: Do we need triggers at all?  Triggers are a derivative 
of a connection that allow us to change the state
of a field.  They allow us to flip the light on for say 5 seconds 
after a pick sensor as opposed to simply
turning the light on whenever the user is in the room.
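Gavin's light example, sketched with hypothetical sensor and trigger 
nodes (every name here is invented for illustration):

DEF pick PickSensor { }            # fires on a user pick
DEF lamp PointLight { on FALSE }
Trigger {
  input    pick.fired
  set      lamp.on TRUE
  duration 5                       # revert the field 5 seconds later
}

The point is the duration: a bare connection could only mirror the 
sensor's state, while the trigger changes a field and can change it 
back.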

Rikk: We can't prove that the code will always work, 
what we need to prove is that it will work most of the
time in common practice.

Brian: Is there a way to further restrict state changes 
when triggers come over the net in a multi-user situation?

Do we want to support strict synchronization between user experiences, 
so that the same experience happens at the same time for multiple 
users - in the same GLOBAL time?  If two musicians are playing 
together, the latency over the net makes that kind of synchronization 
very hard.

Can we limit the model to keep it strictly deterministic?  
An atomic lock structure on a trigger?  You only lock
things on the local machine...

We have to know who owns the clock, who owns each object, etc...

Are we opening the possibility for problems in DIS later 
by designing triggers now without dealing with DIS?

We could set up an arbitrator that makes consistent 
arbitrary decisions on the basis of incomplete information.

Leaving this up to the browser guarantees that different results 
will occur on different systems.

The skill of the author in dealing with conflicts is critical.  
There is a way to build deterministic worlds,
DIS does it every day.  Perhaps this problem is too difficult 
to solve for this group today, and we need to set
up a working group that might include some DIS folk 
who know the problem intimately.  We will.   Perhaps
we should also involve internet service providers, such as Mpath, 
who guarantee low-latency synchronization between multiple users.  
Perhaps DIVE.

What can we decide on today?  If we don't allow fanning of connectors, 
then we can ensure that triggers will be deterministic.   

We can allow two users to turn off the same light at the same time - 
what they don't know won't hurt them.  But if one turns the light 
down, and the other up, the light will behave oddly.  If you allow 
these changes in an API rather than in VRML, then the external code 
can be watching the light in the simulation all the time.

Rikk is more concerned about paging while running a JAVA interpreter 
on the outside of VRML.

Gavin claims that it doesn't matter if this runs inside or outside of 
VRML:

There's an event that starts some stuff happening.  
There must be determinism in the stuff that is happening.
As soon as an external API writes back to create a new event, 
then another script begins.  If all you do with the trigger is to 
create an event, then it's simple to implement, but it's not nearly 
as powerful as if you allow the trigger to change the value of a 
field.  
Otherwise, you'll have to execute lines of, say, Java, just to 
change the value of the field, rather than simply hitting one VRML 
node.

Will our solution be considered flawed if it only breaks under 
multi-user conditions?  Perhaps we can support connections and 
triggers 
in a single user environment.   Rikk: If you don't analyze the 
multi-user case now, you must admit that you're opening up the 
possibility for major revisions 
later.  Let's agree to put triggers and connections in for single user 
conditions, and state explicitly that the model may change for the 
multi-user 
conditions.  The dark ages of VRML will ensue for a year, during which 
multi-user experimentation will go on.  Perhaps we should do a sample 
API 
engine that browsers can write to.  Plan for a year of anarchy and 
enjoy it.


============= API (2.0)============= 

The API is an interface to the scene graph, the network, or the 
browser.  
If it's to the browser, then it can tell the browser to move the 
camera, etc.  If it's to the scene graph, then it can edit it directly 
- much more power.  

Mitra: How an API should work.
In VRML API proposals you have a Code node, like this:
Code {
  interpreter
    Language = java
    Program  = { begin ... end }  or  { http://url.to.some.java.code }
}

To execute an external piece of code, you must	
(1) locate the interpreter,	
(2) load the program,	
(3) run the program.  

Browsers must provide the following APIs:	
AddNode		
DeleteNode	
SetField	
GetField	
SendMessage	
RegisterForEvent

Callbacks for various registered events will be made from the browser.	
(various events)

Mark:  Another model -- load an application that controls your 
browser.
Browser has UI and Scene, application can talk to and completely 
control browser.  

These models are substantially the same, the only difference being 
that in Mitra's model, the location of the program (or even the 
program itself) is contained in the VRML file, while in Mark's the 
application is external to, and drives, the renderer.  Tom: Define 
new objects that act like triggers and can invoke external actions. 

In all these models, the complexity of the browser is approximately 
the same.  The browser still must provide an API that allows some 
external program to interact with it.   Both models allow reasonable 
platform independence.

Tom's model: 
Code {
  code
  inputs
  returns
}  
This is functionally identical to an Inventor engine.  
The code goes off and executes 'till it returns a value, 
which can then be examined by connector and trigger nodes.  
This model gives protection from side effects, but makes it much 
harder 
to create complex behaviors. 

How do you bind the API to the language interpreters?  Suggestions -- 
sockets, pipes, RPC, a general message api, C++ objects.

To make this work, you have 4 pieces of code:

WebSpace
Library (1 per platform, which implements C calls)
Glue Layer between library and script interpreter
Script Interpreter	

Details to be filled in:  
How do you access browser objects?   
What specific APIs are needed?  
Do we need to allow an API like Compile VRML?  
Are calls to external routines synchronous or asynchronous?

============ Collision Detection============
Author must be able to specify whether or not the user can collide 
with an object.  The default for objects is not to detect collisions.  

How WebFX does it:

A node that indicates whether or not collision detection is globally 
enabled.
DEF CollisionDetection Info {
   string "TRUE"
}

A node that indicates whether or not a collision has occurred:
CollisionDetection {
  fields[SFBool collision]
  collision TRUE
}

Do we need surrogate objects (bounding boxes or spheres) 
for collision detection?   A surrogate can be connected to an object, 
so that if  the viewer collides with the surrogate, it returns (how?) 
the name  of the desired object.

Mark:  The purpose of collision detection is to produce a rich user 
experience.   Mitra: A scenario for rich user experience -- I walk 
into a wall and (1) I hit the wall and can move no farther in that 
direction (2) I hit the wall and bounce off (3) I hit the wall and the 
wall falls over.

Objects that can be collided with
The region that you collide with
The point of collision on the object
The action taken on collision

What happens on collision:	
(1.1) you don't go through the object	
(2.0) a behavior.

Do we want arbitrary geometry or bounding boxes to detect collisions?  
Do we add a bounding box to separator nodes?  
Brian:  Can we have collisions with moving objects?  

Proposal -  Collision detection separator: Collision detection is a 
separator, which has children, which are drawn.  It has one field, an 
SFNode, which points to a node that contains the geometry for 
collision testing.   
CDSeparator { 
    surrogate Cube { ... }   # SFNode
    ... children ...
}

... or, we can add the fields to the separator...

Separator {
  SFNode collideWith
  SFBool collideOn   # AUTO | ON | OFF
  ... children ...
}

This causes problems because collideOn must be set ON through 
an object's entire hierarchy for collisions to be properly detected.  
Alternately, we could make collision detection a property node so that 
its setting would be inherited by all children.
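The property-node alternative might look like this (hypothetical node 
type):

Separator {
  CollisionStyle { collide ON }   # inherited by the children that
                                  # follow, like Material or Texture2
  Cube { }
  Sphere { }
}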

Tony will clarify inheritance of materials through WWWInlines.  

What is the shape of  "you"?    
For realistic collision, the viewer should have size and shape.  
WebSpace models the viewer as a unit sphere, 
other browsers can more accurately model the viewer's avatar.  

Possibly avatar dimensions or bounding geometry should be placed 
in the environment node.

Mitra:  How can we handle the 90% case?  For 50%, 
the author specifies nothing, so we use the near clipping plane 
to determine the viewer's collision boundary.  
To get the other 40%, we can allow the author to specify avatar size 
or bounding geometry.   The other 10% (Alice in Wonderland) is harder.  

Rikk: Brief digression into terrain following:  
Avatar has size, shape, centroid, and direction.  
Ramp up -- if your direction vector is above the step, 
then you can go up over it.  
(Ramp down is more difficult -- it seems to require gravity) 

Is avatar size part of Navigation or World attributes?  
Environment node -- one per separator; 
Navigation node -- there's one, and the browser will take the first 
one it encounters.   
It looks like it's going in the Navigation node.   
Avatar shape will be represented by an SFNode in Navigation, 
which will act as a hint to the browser.
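So the avatar hint might end up spelled something like this 
(hypothetical field names):

Navigation {
  avatarSize 1.0           # rough radius hint, in world units
  avatar     Sphere { }    # SFNode: bounding geometry hint
}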

============= Camera Paths============= 
Very similar to animations.  
You just start an animation that affects the camera 
and therefore the viewpoint.  
Intermediate positions are determined via the normal animation 
interpolation methods.   There's a camera and a target.  
Can the camera be attached to a moving object?  
Can light positions be animated?   
General idea:  we want animation to be general enough to work 
on any object in the system -- not a specific hack for camera paths.  
WebSpace has camera paths, which are implemented through their 
general animation system, today.

============= Streaming/Multiuser/SRDP============= 

NOT COVERED.

=====  Afternoon Review =====

Action Items:

0) Vision statement - Mark - end of next week

1) Rikk - 4 Channel JPEG?  No answer yet. - Monday
2) Formula for mapping Inventor specular model to low-end renderers - 
Tony & Rikk - 27 September
3) Agreed on the Emissive Model - have to clearly describe it within 
1.0 spec - Rikk - 27 September
4) Incorporate Brian's proofreading comments into the 1.0 spec - Gavin 
- 27 September
5) Mitra will find out what to do about the major MIME type - next 
week ASAP
6) Clarify the 1.0 spec - Gavin, Jan, Tony - 27 September
7) Add everyone to VRML products and create vag mailing list - Mitra - 
ASAP
8) Write up the Environment node for review - Gavin
9) List of currently used extensions will be posted - Mitra
10) Put forth the naming convention for extender nodes - Mitra
11) Write up the "Interface" proposal, Object model - Mitra, Tom, 
Tony, Gavin
12) Internationalization WG - Jan, Tom, Mitra, Jon
13) Animation/Multimedia/Synchronization WG - Brian, Gavin, Tom
14) URNs/Caching WG - Mitra, Mark
15) Push/Pull - Mark, Tom
16) Forms/Text input - Rikk, Gavin
17) Audio WG - Bill, Jeff Close (Segregation of sound, rooms is a 
research topic)
	
1 September: posting for public scrutiny	
8 September: close of comments, 1 week of spec review	
15 September: spec release

18) Generalized Cylinder - Gavin, Jan, Brian
19) Indy for VRML.ORG - Rikk, Mark
20) Management of a 1.1 site - Mark
21) Separator node design, collision - Gavin, Rikk, Tony
22) Connectors and Triggers - Mitra, Brian, Mark, (Gavin to send his 
design)
23) Networked API - Mitra, Jon, Mark
24) Navigation node - Brian
25) Elevation Grid - Gavin
26) Small changes to graphics specifications - Tony, Gavin

What is the meaning of the "X" column between the 1.1 and the 2.0 
columns?  
The items in 1.1 are:	
a) very important and a significant improvement, and	
b) will hold up release of 1.1.  

The items in the X column will move into 1.1 if we agree on them.  
Otherwise, they remain experimental extensions until
they can be agreed upon for inclusion in 2.0.

1.1 Column

environment
naming conv.
UTF-8
URN
push/pull
elv. grid
collision
separator
navigator
small changes
kill group

1.x Column

sound
interface
international
animation
media
forms/text
cylinder (gen)
sensor/trigger
network API
hierarchical naming

2.0 Column

behaviors
multiuser
persistent

============== TASKS BEFORE THE WHOLE GROUP ===================

September 15th release of tight easy spec (1.1) ,
and outline of hard (1.x) stuff.

Tomales Bay in Marin - the Marconi Conference Center for the next 
meeting.

(Specifics of posting made to www-vrml on 29 August 1995)
Where we met, when and whom.
Goals to find minimum set of successful additions to VRML
Formal name as VAG
Home of VRML information on the Web
Talk about the plan in general terms with respect to schedules.  
Looking for comments from industry and academia.
Mission and goals; principles.
People know what this group is doing.
Reviewed published proposals for VRML.

Comments 

Tony - Thinks things went well considering 10 geeks in the room.
Jan - Thinks we could have missed potholes but isn't sure.  We should 
continue with no process at least until the next meeting but at some 
time we need to become more formal; when we slow down, anyway.
Gavin - Yeah, it was cool.
Rikk - I think it's going pretty good.  No other comment.
Jon - I think this was great; for a meeting as long as it was, as 
content-packed, it went surprisingly well; order was generally 
maintained.  Wouldn't risk changing anything.
Mitra - Wishes we had stuck to the agenda more precisely.

Notes posted by Mark D. Pesce
mpesce@netcom.com

Back to VRML Architecture Group