#cwebber2rhiaro: well in the recent discussion we talked about UPDATE updating only parts, except that adding/removing values to sets
#cwebber2aaronpk: I'm kind of on the same page with that but not 100% sold
#cwebber2... it does let me, with the bookmark thing, just add a photo
#cwebber2rhiaro: and what happens if you just add the photo
#cwebber2sandro: if you just use RDF instead of json it wouldn't have this problem
#cwebber2rhiaro: so it's not like there's mysterious stuff this client doesn't know about
#cwebber2tantek: I feel like there's a real world oauth type assumption here we're not taking into account
#cwebber2... which is that clients like on Flickr allow people to tag other peoples' stuff but not add/delete tags
#cwebber2sandro: in this particular case it doesn't actually conflict because it's up to the server
#cwebber2aaronpk: but it makes it more complicated on the server... tantek's on the right track in saying that if we add the ability to *just add* something
#rhiaro... In both of these examples of the same update operation as form-encoded and json, the data structure of the request is the same, so you can convert between them
#rhiaro... However there's no actual functional benefit to form-encoded in this case
#rhiaro... Whereas there is a benefit to form encoded for creating, cos it's simpler. But for updates it's not simpler
#rhiaro... The proposal is for doing updates you must use json
#rhiaro... And updates are not supported in form-encoded
#rhiaro... It means that clients and servers don't have to handle both formats so in theory it should be easier to both because there's fewer cases to handle
#rhiaro... And there isn't a large benefit to using form-encoded for this anyway
#rhiaro... whereas if the spec allows the client to send either, the server has to support both, and will probably end up just mapping one to the other
#rhiaroeprodrom: There is an argument from a consistency point of view... I've been using form-encoded for creation, why should I switch to json for update? That said, if real implementors are not saying this, it makes sense to me that if there are two ways to do something and everyone is doing it one way, it's not necessary to support both ways
#rhiaroaaronpk: I think it's also important to note that in a lot of cases a client might only create posts and never intend on updating posts
#rhiaro... In which case it can still just create with form encoded
#rhiaro... So there is no burden to switch formats
#rhiaro... And if the client is planning on updating posts it can use json all the way through
#rhiaro... So for me, cleans things up to only support json for updates on the server
#rhiaro... I'll go through the examples and remove all the form encoded versions
#rhiaro... Publishing clients is a different class of client than editing clients
#rhiaro... Publishing clients MUST support sending form-encoded requests and they may only publish, and never edit posts. Whereas editing clients are going to support the full list of operations on posts, so they can just use json all the way through
#rhiaro... I feel like it separates those conformance classes better
#rhiaroaaronpk: resolution is that editing clients don't need to support form-encoded, only json
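To make the resolution above concrete, here is a hedged sketch of what a JSON update request body could look like under the Micropub update syntax being discussed (`action`/`replace`/`add` keys); the post URL and property values are placeholders, not from the minutes.

```python
import json

# Hypothetical Micropub JSON update: replace one property and add a photo.
# "action", "url", "replace" and "add" follow the update syntax discussed
# above; all URLs and values here are placeholders.
update = {
    "action": "update",
    "url": "https://example.com/posts/100",
    "replace": {"content": ["hello moon"]},
    "add": {"photo": ["https://example.com/img/1.jpg"]},
}

# This JSON string would be the POST body sent to the micropub endpoint.
body = json.dumps(update)
print(body)
```

Note that per the discussion above, a publishing-only client never needs this: it can keep using form-encoded creates.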
#rhiaroaaronpk: My current thinking with the media endpoint is that based on what I've seen with... github issues - if you drag a photo into an issue it uploads it right away and puts a url into the markdown. In these cases it seems like the url to the image is permanent, it is meant to be the actual location of the photo
#rhiaro... The reason the spec should specify it is if we want to be able to have someone create a media endpoint service that clients and servers can expect to work a certain way
#rhiaro... You can implement your own in your micropub endpoint, then it becomes an implementation detail. But if we want to support stand-alone media endpoints then clients and servers need to know how it will work
#rhiarosandro: isn't it only servers that need to?
ben_thatmustbeme joined the channel
#rhiaroaaronpk: I guess it is only the server that needs to know whether it should copy the photo or not
#rhiarosandro: sounds like a possibility for future standardisation
#rhiaroaaronpk: So should the spec mention it? Or not?
#rhiarocwebber2: if you post something and that cycle never finishes in mediagoblin, it gets garbage collected eventually
#rhiaro... But if you don't end up implementing it it doesn't affect the standard
#rhiarosandro: clients must do media endpoint discovery? they can't just post it to the micropub endpoint? The discovery thing concerns me. Seems like a whole complication
#rhiaroaaronpk: it's a different issue. I like that for clients that only want to create posts they can just post a photo to the micropub endpoint. That's still in there in the form-encoded creating
#rhiaro... One of the reasons for using a media endpoint at all was for user experience when you're putting multiple photos in a blog post. Also if you want to create a post with the json syntax you have to do two different posts
#rhiaroaaronpk: the url returned is not expected to be the actual jpeg url, it's supposed to be the url of the post
#rhiaro... The thing being created is not the jpeg, it's the post with all the data
#rhiarosandro: you could have that be a header on the post
#rhiaro... If you're posting certain media types you could get certain behaviour
#rhiaroaaronpk: I think it moves the complication to a different part of the process
KevinMarks joined the channel
#rhiaro... The way it's written right now, the complication is discovering the media endpoint
#rhiaro... Otherwise it's: what kind of data does the endpoint expect
#rhiaro... Chris, you said media goblin does do the periodic cleanup of media never used in a post?
#rhiarocwebber2: I remember tsyesika and I talking about it at some point
#rhiaroaaronpk: how does it know if a file is used in a post?
#rhiarocwebber2: Media is specifically associated with a post in mediagoblin's case. You upload it and it ends up going through a step where it gets transformed by the processing to generate multiple resolutions of the file etc, and also associates that...
#rhiaro... It's a pattern that we are seeing driven by better UX
#rhiarosandro: in that case we want not just multipart form for upload, but you want to use javascript to send in a recoverable way
#rhiaroaaronpk: partial uploads are a different story
#rhiarosandro: in that case you want a different protocol. ideally rsync over websockets to the server..
#rhiaroaaronpk: I think it's useful without going that deep into it
#rhiaro... Even when the upload either succeeds or fails it still provides a better experience cos when it does succeed it's great. doesn't have to support partial upload to provide a better experience
#rhiaro... There is this pattern that we're seeing implemented by lots of services, so it's useful to capture that in the spec and encourage implementors to also follow that pattern
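The two-request flow aaronpk describes (upload the file to a media endpoint, then reference the returned URL when creating the post) can be sketched roughly as follows; the endpoint URLs, the returned `Location` value, and the helper function are hypothetical.

```python
# Hypothetical two-step media-endpoint flow. Step 1 (not shown) would
# POST the file to the media endpoint; suppose it responds with
# Location: https://media.example.com/file/abc.jpg

def build_create_request(media_url):
    """Build the JSON body for the second request: creating the post
    that references the already-uploaded file."""
    return {
        "type": ["h-entry"],
        "properties": {
            "content": ["Holiday photos"],
            "photo": [media_url],
        },
    }

# Placeholder for the URL the media endpoint returned in step 1.
location = "https://media.example.com/file/abc.jpg"

# Step 2: create the post, pointing at the uploaded file.
create_body = build_create_request(location)
print(create_body["properties"]["photo"])
```

This matches the GitHub-issues pattern mentioned above: the media URL is handed back immediately and treated as the permanent location of the file.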
#sandrotantek: There's evidence there's been work on convergence
#sandrotantek: The group is aware there's different vocab approaches at work. And has converged them in some places. But in our parallel approaches work mode, we don't see this as a blocking issue.
#sandro.. if there's an implementor that comes to the table and finds this a blocking issue, we'd want to know.
#sandrotantek: It's an antipattern, where the spec doesn't say enough to make things interop
eprodrom_ joined the channel
#sandrocwebber2: maybe over the next few months it might be a fun experiment to see how far you can get crossing the vocabs and the pub protocols, but it might get us into trouble
#sandrorhiaro: I use aaron's mp clients and on the server I do a little rearrangement and treat it as AS2
#sandroaaronpk: I have a plan for the validators (eg test suite)
#sandrocwebber2: I could maybe have AP reading for CR in a month
#sandroaaronpk: I'm concerned about waiting for AP when there are unknowns for AP
#sandrocwebber2: I think AP (without ASub) I could do it within a month or a month and a half....
#sandroeprodrom: I only bring this up because we've talked about this before.
#sandrosandro: How about instead we just have each draft in a big box point to the other spec, "This is one of two social APIs from the SocWG, with slightly different use cases and approaches; implementors should check out the other one"
#aaronpk"This is one of two client APIs being produced by the working group with slightly different use cases and approaches. implementers should check out and review the other approach here."
#sandrotantek: That would greatly help communicate this to the outside world, yes.
#sandrotantek: It helps show that clearly these are clearly from the same group
#tantekeprodrom: if we have any additional work to move forward, and we know what actions we need to do to move forward, or ok if have no actions to do
#tantekaaronpk: main issue is regarding verifying behind paywalls
#tantekaaronpk: if you have a document like a PDF that is restricted, then create a separate page that the document references, so that there's an actual page with the document's metadata
#tantekaaronpk: there are a lot of benefits to that
#tantekeprodrom: I think the resolution you proposed makes sense
#tanteksandro: or maybe you're revealing private information
#tantekaaronpk: this is specifically about you need to pay to get access to this journal
#tanteksandro: he provides some text, which I think is unnecessary
#tanteksandro: if there is already a trusted relationship, then there's no need
#sandro(where "sender" in his proposal should be read as "owner of the source")
#tantekaaronpk: I think what he was getting at is not actually going to work because webmention is a thin payload
#tantekaaronpk: or I could add something with the suggestion, if you have restricted / paid access content, you should create a landing page for that content that is public that has the links
#tantektantek: issue opener asks for that in his last comment
#tantektantek: I think the intent of this requirement was that the receiver at the target's domain knows that the target is a valid resource, like the page / redirect actually exists
#tanteksandro: maybe I want to accept webmentions for all pages, 404s, and use that to learn of bad links and create redirects
#tantektantek: if we are making it possible for any target to be a valid resource then what is the point of this conformance requirement
#tantekaaronpk: the point of this sentence is that receivers should not accept just all webmentions
#tantekaaronpk: another example is perhaps a paid proxy that receives webmentions on behalf of others, and if someone's account expires, then the proxy would stop accepting webmentions on behalf of the target
#tanteksandro: maybe expand on the "valid resource"
#tantekaaronpk: I think that's a good way to handle this
#sandroincluding , :for example some servers (wm.io) might accept anything, while other endpoints only accept one particular target URL
#tantekaaronpk: so I will add a "for example" informative text, clarifying the original meaning of that sentence
#sandrosandro: This is an editorial change, trying to better express the editor's intent and WG's understanding
#tantekaaronpk: more ways to discover = less interop
#tantekaaronpk: the cost being potentially fewer documents that can use it
#tantekaaronpk: I think we're fine for the current level of things being published
#tantekaaronpk: and adding this clarification text is fine
#tantekaaronpk: totally up for adding the explicit: non-HTML documents must advertise using the HTTP LINK header
#sandroPROPOSED: Close webmention #40 with editorial revision clarifying that one should only look for HTML tag if content is HTML. Non-HTML resources MUST use the HTTP Link header for discovery. Each additional discovery mechanism imposes a cost on every sender, which we want to avoid.
#tantekaaronpk: also helps show that the spec has thought things through
#tantektantek: in the rare instance we see what eprodrom is talking about, that can be handled by a spec revision
#sandroRESOLVED: Close webmention #40 with editorial revision clarifying that one should only look for HTML tag if content is HTML. Non-HTML resources MUST use the HTTP Link header for discovery. Each additional discovery mechanism imposes a cost on every sender, which we want to avoid.
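A rough sketch of the discovery behaviour in the resolution: check the HTTP Link header first, and fall back to scanning the body only when the resource is HTML. The simplified Link-header parsing below is illustrative only; a real sender should use a full Link-header parser.

```python
import re

def discover_from_link_header(link_header):
    """Pull a rel="webmention" target out of an HTTP Link header.
    Simplified parsing for illustration only."""
    for part in link_header.split(","):
        m = re.search(r'<([^>]+)>\s*;\s*rel="?([^";]+)"?', part.strip())
        if m and "webmention" in m.group(2).split():
            return m.group(1)
    return None

def discover(content_type, link_header, html_parser=None):
    """Per the resolution above: always check the Link header; only
    fall back to scanning the body when the resource is HTML.
    html_parser is a stand-in callback for the HTML-scanning step."""
    endpoint = discover_from_link_header(link_header or "")
    if endpoint is None and content_type.startswith("text/html") and html_parser:
        endpoint = html_parser()
    return endpoint

# A non-HTML resource (e.g. a PDF) advertises only via the Link header.
print(discover("application/pdf", '<https://example.com/wm>; rel="webmention"'))
```

The design point from the resolution: non-HTML resources MUST use the Link header, so a sender never needs to parse a PDF looking for endpoints.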
#tanteksandro: dropping the connection after a 1MB and then 100MB is still in the pipe? or a range request
#tanteksandro: not sure how many support range requests
#tantekaaronpk: if you do end up downloading, you can only parse first 1MB
#tanteksandro: ok with may, some techniques include, setting right media types on your ACCEPT header, aggressively closing the connection if its a media type you don't know what to do with
#tantektantek: because you call out specific content types it would be good to note how that works here
jet joined the channel
#eprodromPROPOSED: Add text to security considerations for Webmention to suggest using HEAD request during verification, AND add text to Verification section to suggest using Accept header
#eprodromPROPOSED: Add text to security considerations for Webmention to suggest using HEAD request during verification, AND add text to Verification section to suggest using Accept header closing issue #46
#sandronot "suggest using HEAD" but "clarified that it is allowed to use HEAD"
#eprodromPROPOSED: Add text to security considerations for Webmention to clarify that it is allowed to use HEAD request during verification, AND add text to Verification section to suggest using Accept header, closing issue #46
#eprodromRESOLVED: Add text to security considerations for Webmention to clarify that it is allowed to use HEAD request during verification, AND add text to Verification section to suggest using Accept header, closing issue #46
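A hedged sketch of the verification approach the resolution points at: issue a HEAD request with an Accept header and decide from the response metadata whether fetching the body is worthwhile. The accepted media types and the 1MB cap are illustrative choices echoing the discussion, not spec requirements.

```python
# Illustrative Accept header a verifying receiver might send; the
# listed types and the size cap are assumptions, not normative values.
ACCEPT = "text/html, application/json;q=0.9, text/plain;q=0.8"
MAX_BYTES = 1_000_000  # hypothetical cap on how much body to parse

def should_fetch_body(head_content_type, head_content_length=None):
    """Decide, from HEAD response metadata, whether a GET is worthwhile:
    skip media types we can't parse, and skip oversized documents."""
    parsable = head_content_type.split(";")[0].strip() in (
        "text/html", "application/json", "text/plain")
    too_big = head_content_length is not None and head_content_length > MAX_BYTES
    return parsable and not too_big

print(should_fetch_body("text/html; charset=utf-8", 4096))  # small HTML: fetch
print(should_fetch_body("video/mp4", 500_000_000))          # huge video: skip
```

This also covers sandro's "aggressively close the connection" point: the receiver never starts the large download at all.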
#tantekeprodrom: that resolves the issues that we have
#tantekeprodrom: let's take a 5 min break and finish with AS2 before noon
#rhiaro... Linking to implementation reports, template, linking to test suite, submission process, change links to repo, adding a note about dropping features
#rhiaro... Things that don't get implemented will be dropped
#rhiaro... Update a couple of references, eg. CURIE
#rhiaro... We have a list of implementations of AS1, they're clearly good targets for discussing AS2
#rhiaro... Next steps there will be contacting the companies on that list, letting them know we're moving to CR and we'd like to get their implementation reports
#rhiaro... Which will not only stimulate getting reports, but also implementors
#rhiaro... After that I'm not sure what else we need to do
#rhiaro... Is there additional work that needs to go into AS2?
#rhiarotantek: I'm specifically looking to see what percentage of AS1 implementations (that are current - there are old ones that nobody has touched for years, don't expect those) will adopt AS2
#rhiaroeprodrom: And should inform.. also means any activitypub implementations are by definition AS2 implementations
#rhiarosandro: just looking at the transition request for it, in reverse order: we should link to the implementations so far, which would at least be the empty implementation report repo
#rhiaro... But if we know of some already, even without reports, would be good to enumerate them and show something going on
#rhiaro... For wide review, I don't know about wide review for AS2. There's tons of github issues. Have we sent emails or announcements we can point to?
#rhiaroeprodrom: Good idea to send emails out to old AS lists
#rhiaroaaronpk: this may not be related, but when we're trying to get people to implement AS2, what is the incentive for people who are not members to implement the draft before it's an actual rec?
#rhiarosandro: so if they come across a problem there's still time to fix it
#rhiaro... It's unlikely to change, but if it's going to change... if they're going to hit a fatal problem with it, it's better to know that before it's too late to change it
#rhiaroeprodrom: there are companies like getstream.io, activitystreams is their business
#rhiaro... They may want to have that as.. 'we are the first implementors of AS2'
#rhiarosandro: also w3c can do some press around recs, testimonials, quotes from early adopters, so chance to get into that press cycle
#sandro"Now it is such a bizarrely improbable coincidence that anything so mindbogglingly useful could evolve purely by chance that some thinkers have chosen to see it as a final and clinching proof of the non-existence of God. "
bblfish joined the channel
#sandrobblfish, is your name a reference to HHGTTG? What was your thinking in adopting the name?
#rhiarotantek: you've never seen a publication ready version, here it is
#rhiaro... The only normative change to this since the last version is that more people have started publishing video posts so video got added to the algorithm
#rhiaro... Last year, when we resolved to publish the first time. sandro raised.
#rhiaro... First, we're doing the general how-does-this-fit-in for all the drafts
#rhiaro... it references AS2 and AS2 vocab in informative explanations for, like examples. That's in the document itself, there's no summary that explains document relationship with AS2
#rhiaro... I'll take an action to add something informative for that
#rhiaroeprodrom: I feel like the abstract clearly says ... *reads abstract* ... so you don't have a post type (check), you want to determine the type of that post (check) -> this is the algorithm to do it
#rhiaro... It feels like the motivation is fairly clear
#rhiarocwebber2: one of the major things I was interested in this was, that makes it really useful to the group, especially with having mp and ap moving forward at the same time, is that it provides a bridge between the things we currently have in the group
#rhiaro... you're able to move from something without specific types, like a micropub type system, to a system with types
#rhiaro... That's one of the major questions in this group anyway, how do you justify these two different stacks, it seems like this is helpful
#tantek"Post type discovery helps provide a bridge between systems without explicit post types (e.g. Micropub, jf2) to systems with explicit post types (e.g. ActivityPub, Activity Streams)."
#rhiaroeprodrom: That's more explicit than what's in there now, and says why PTD is important
#rhiarotantek: I'll just keep that issue open until I've made the edit
#rhiaroeprodrom: that would close that issue I believe... sandro?
#rhiarorhiaro: the vague language is not good for rec track, would be clearer how it's useful if it specifically used AS2 terms. eg. RSVP post doesn't exist in AS2
#rhiarotantek: we could also have conformance classes like if you are an AS2 generating application you must generate the following objects from the following types
#rhiaro... If you want to open an issue on conformance classes that would help
#rhiaro... If we get more implementors we can point them at this to say if you're consuming untyped data, this is how you get to AS2
#rhiaro... Another possible source for untyped data is RSS
#rhiaro... Various sites that do RSS feeds of their activities that have made stuff up. I can research to see if there's something I can add to post type discovery to make that more explicit
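The algorithm under discussion can be sketched roughly like this; the property names follow the Post Type Discovery draft, but the ordering is abbreviated here and is not the full normative list.

```python
# A simplified sketch of Post Type Discovery: inspect the untyped
# post's properties in priority order and infer an explicit type.
# Abbreviated for illustration; not the full normative algorithm.
def post_type(props):
    if "rsvp" in props:
        return "rsvp"
    if "in-reply-to" in props:
        return "reply"
    if "repost-of" in props:
        return "repost"
    if "like-of" in props:
        return "like"
    if "video" in props:
        return "video"
    if "photo" in props:
        return "photo"
    # A name distinct from the content suggests an article; else a note.
    name = (props.get("name") or "").strip()
    content = (props.get("content") or "").strip()
    if name and name != content:
        return "article"
    return "note"

print(post_type({"photo": "a.jpg"}))
print(post_type({"name": "Title", "content": "Body..."}))
```

This is the "bridge" role described above: untyped micropub/jf2 data in, an explicit type (mappable to an AS2 type) out.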
#rhiaroeprodrom: I'll give a quick overview and where it's at
#rhiaro... PuSH was originally developed by bradfitz and bret (?) from Google
#rhiaro... it was a protocol which they published along with an implementation which is the google hub
#rhiaro... Basically a push-based feed system where you can subscribe to feeds and receive fat pings
#rhiaro... The first version 0.3 had a number of interesting characteristics: one is that it was only defined for atom feeds. Another was that it had a kind of complicated set of roles; a publisher and subscriber, and then a 'hub', so you can set it up so the publisher and subscriber don't have to scale, but the hub does
#rhiaro... At its height, all google feeds were PuSH enabled: buzz, blogger, feedburner
#rhiaro... It was pretty well implemented at google
#rhiaro... a third-party implementation called superfeedr was also enabled for tumblr, wordpress.com, a number of others
#rhiaro... it kind of hit a peak where it was enabled for a lot of rss and atom feeds
KevinMarks joined the channel
#rhiaro... There were a few issues that made having a new version make sense
#rhiaro... When the community and business groups at w3c first started, PuSH was one of the first CGs, the lead was Julian (sp?), the ceo of superfeedr
#rhiaroeprodrom: big changes in 0.4, communication between publisher and hub. Redefined how to do publication and subscription for things that aren't atom feeds
#rhiaro... anything that can have a url can be subscribed to
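For reference, a PuSH 0.4 subscription is a form-encoded POST from the subscriber to the discovered hub; a minimal sketch follows (placeholder URLs, with `hub.secret` and lease handling omitted).

```python
from urllib.parse import urlencode

# Sketch of a PuSH 0.4 subscription request body. The topic and
# callback URLs are placeholders; a real subscriber would also handle
# hub.secret and the hub's verification-of-intent GET to the callback.
params = {
    "hub.mode": "subscribe",
    "hub.topic": "https://example.com/feed",
    "hub.callback": "https://reader.example/callback/123",
}
body = urlencode(params)
print(body)
```

Because 0.4 lets "anything that can have a url" be a topic, the same request shape works for atom, RSS, or an h-feed HTML page.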
#rhiarotantek: I'm supporting that on my site, publishing via PuSH 0.4 using superfeedr
#rhiaro... First was that when the open web foundation was first announced, google had announced that they would be putting a number of specs under the open web foundation patent license and so there are blog posts to that effect, but they never actually published the paperwork that says, signed at the bottom, this is under this patent
#rhiaro... By the time that we started to be interested in this, and having it as a w3c spec, the people who worked on it were no longer working on it and there did not seem to be as much of an institutional interest in this kind of standardisation around feeds
#rhiaro... Fast forward to now, the superfeedr hub was just acquired by medium
#rhiaro... Now we have some diversity of hubs and implementation experience, and it seems like... people are using different hubs and publishing to different hubs, and everything seems to work. I don't think we've run into interop problems where your site can only go to one hub because of how it's implemented; a reader that supports consuming PuSH 0.4, consuming atom or h-feed in real time via PuSH 0.4, seems to work with all of the hubs
#rhiaro... There's basically been really good implementation incubation and maybe we're all sidestepping the problems in the spec?
#rhiaroaaronpk: the reason they all are working together is that the holes that were left in the spec we have all filled in the same way because of the tutorial on the indiewebcamp wiki
#rhiaro... In a couple of places where the spec doesn't say what to do, I just said 'do this'
#rhiarosandro: I read 0.4 on Sunday and I was like... this is so full of holes
#rhiaroaaronpk: but it's also... they're not that big, you can fill them
#rhiarosandro: but if you don't fill them you don't have interop
#rhiaroaaronpk: one side, but not all the way through
#rhiaro... Specifically, notifying the hub of new content is not in the spec
#rhiarosandro: intentionally left out. Also what the notifications from the hub are is left out. Gaping hole.
#rhiaroaaronpk: but if you're in an ecosystem where everyone is publishing and expecting the same type of content it works
#rhiarosandro: the press around it is all about fat pings, but indiewebcamp doesn't use it for fat pings. There's no format defined for what a fat ping would look like
#rhiarotantek: we have specifically chosen to use the thin pings subset of 0.4
#rhiarosandro: 0.4 doesn't talk about that. There's nothing in the spec about what you send.
#rhiarotantek: we just send the url of the thing that's been updated?
#rhiaro... And if you go look at the section how to subscribe, it walks you through every part of the request, including receiving notifications, including separate sections for standard and fat pings
#rhiaro... For standard it says will not contain a body
#rhiaro... If you receive an empty notification, treat this as an update to the url
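The thin-ping behaviour just described (an empty notification body means "the topic updated, re-fetch it") might be handled on the subscriber side like this; `refetch` is a stand-in callback, not a real API.

```python
# Sketch of a subscriber's handling of hub notifications, following
# the thin-ping convention described above. refetch is a hypothetical
# callback that re-fetches the topic URL over HTTP.
def handle_notification(topic_url, body, refetch):
    if not body:
        # Thin ping: empty body, treat as "this URL was updated".
        return refetch(topic_url)
    # Fat ping: the body carries the updated content itself.
    return body

# Simulate receiving a thin ping: record which URL would be re-fetched.
fetched = []
result = handle_notification("https://example.com/feed", b"", fetched.append)
print(fetched)
```

Note aaronpk's later point that thin pings also allow authenticating the follow-up GET, which fat pings cannot.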
#rhiaroeprodrom: talk at a more political or editorial or work level
#rhiarosandro: the takeaway from this description is that PuSH 0.4 by itself is not useful to us, but refined the way aaron has is useful for some subset
#rhiaroeprodrom: well it is being used, so in that case
#rhiaro... We have two or three options... we take the PuSH 0.4 and take it to some sort of rec level right now and kind of steward it through that process
#rhiaro... The other is that we take the PuSH 0.4, make an 0.5 that clarifies some of the things that we're doing, but maybe talks about what's specifically being used in the indieweb community
#rhiaro... Third is that we don't do anything with it and accept that it's a community standard but that we don't necessarily have anything to add to it
#rhiarosandro: One more: to change the name... like you said for 0.5 but say 'inspired by'
#rhiaroeprodrom: right, we could do something similar. When you do discovery you could do it for some other name, like not 'hub' it's 'publisher' or something
#rhiarocwebber2: which ones of those are possible within IP if we don't get google to give it up.. how risky is that?
#rhiaroeprodrom: google is a member of w3c; if we decided to publish a new version of this spec, part of that process would be a call for exclusions, which is where they say if they have ip considerations that would block publication of this spec
#rhiaro... It does not seem like we could get to a point of being at PR and causing problems with murky ip around this spec
#rhiaro... And the people who are being paid a lot of money to figure out google's IP will do it instead of you or me
#rhiarotantek: I would say that if we took on PuSH as a work item in this group, whether called that or called something else, then if we successfully produced a rec, it would put it in a stronger... or in a more implementable, less-IP-concern situation than we have today
#rhiaro... in that there would be at least some degree of w3c participating member commitments, implied or explicit, through that process
#rhiaro... The larger/first issue to resolve before the ip issue is that there was the CG, Julian still felt very strongly about editing and updating the spec. I think that were we to decide to go forward with it, specifying the details we have figured out that allow interop would be a good thing, and I would not be comfortable having that gated on someone outside of the group
#rhiaro... We have approached Julian in the past explicitly to participate. I think he hasn't had the time, I don't think it was a negative thing
#rhiaroeprodrom: for him and his business, the state of PuSH 0.4 fine, it works for what he needs
#rhiarosandro: both what you said and the name, the right thing to do about the name is to ask the people who feel they have ownership of the old name, to see if they want us to call it PuSH 0.5 or name it a new thing
#rhiarotantek: I would word it more strongly - hey, we like the work you've done, we've continued trying to specify details, we would like to take that work and publish it with the same name with a new version number
#rhiarosandro: we don't want to hostilely claim the next version number
#rhiarotantek: I believe brad doesn't care... bret is happy to see anyone build on it... I think neither one of them wants to deal with talking to google's lawyers
#rhiaro... Julian feels the strongest, he produced 0.4. If there's anyone we need good vibes from, make sure he knows and agrees with it happening, it would be Julian
eprodrom_, jasnell_, bengo_ and shepazu_ joined the channel
#rhiaroeprodrom: Another objection... limited time, limited resources. I'm not going to edit this. I don't know who is. But we'd need to have someone step up and do it. We only have 7 months
#rhiaroaaronpk: not sure if that works all the way through
#rhiaro... I haven't thought it through yet. Might work, not sure.
#rhiaro... Reason because.. it might depend who is trusting the hub
#rhiarosandro: the hub has to be the one enforcing the access control
#rhiaro... Really doesn't work well to have a third party hub with access control
#rhiaroeprodrom: One technique is to have different feeds by group. Secret feeds or have a token in them that's hard to guess
#rhiaro... The feed of stuff that evan publishes that's available to sandro might be under a long complicated string
#rhiaro... Shifts that effort onto the subscriber, it's hard to manage
#rhiaro... It's especially hard to deal with combinations of things
#rhiaro... That makes it kind of a tricky.. I wouldn't recommend it for anything that's not public
#rhiarotantek: sounds like what you're saying is if you went down that path of a PuSH-based system you're gonna end up stuck with public-only functionality
#rhiaro... Which is another reason to make it a note not rec-track
#rhiaro... But helps at least capture the state of the art use of PuSH, for anyone who wants to know, here are implementations, if this is good enough for your use cases
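The "hard to guess feed URL" technique eprodrom describes above could be sketched as below; the base URL and path scheme are made up for illustration.

```python
import secrets

# Sketch of a capability-URL feed: each (publisher, audience) pair gets
# its own unguessable URL. The path scheme here is hypothetical; the
# management burden eprodrom mentions is that every audience needs its
# own URL, and combinations of audiences multiply quickly.
def private_feed_url(base, audience):
    token = secrets.token_urlsafe(32)  # ~256 bits of randomness
    return f"{base}/feeds/{audience}/{token}", token

url, token = private_feed_url("https://evan.example", "sandro")
print(url.startswith("https://evan.example/feeds/sandro/"))
```

As noted in the minutes, this only obscures the feed; the hub still sees everything, which is why a third-party hub with real access control doesn't work well.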
jasnell joined the channel
#rhiarocwebber2: would make sense to specifically call out that it won't work if you need private communication
#rhiaroaaronpk: or you can do thin pings and authenticate on GET
#rhiarocwebber2: people get pings for things they can't access?
bengo, pdurbin and cwebber2 joined the channel
#rhiaroaaronpk: no you don't ping them if they can't access it
#rhiarotantek: there's a lot of brainstorming about what's possible there, we don't know if it works yet
#rhiaro... You can't have third party subscriber endpoints
#rhiarosandro: we can say 0.5 doesn't include that functionality, but wouldn't characterise it as a dead end
#rhiaroeprodrom: if you have urls as identities you can say this subscriber endpoint is this person..
#rhiaro... If there was an easy way to move it forward then maybe they would have
#rhiarotantek: want to highlight the implementation experience. Ostatus went down that route then backed off
#rhiaro... On the resource thing, is maybe a step here to put the word out that if someone is willing to take on the editorship we would be interested, or do we want to wait until Aaron has time?
#rhiarosandro: get a cancellable hotel now, it's peak tourist season
#rhiaroeprodrom: on the other hand, Lisbon is awesome
#rhiarocwebber2: if there is another place we can do it, I would prefer it. I'm committed to wrapping up my work and if that means I have to take a huge chunk out of my finances I will do it, but I would kind of prefer something less expensive
#rhiarotantek: if the other meeting we had resolved on last time was November in Boston
#rhiaroeprodrom: So, timeframe. Everything currently on the table should be at CR or ready to go to CR. What would we do at a face to face? September
#rhiarotantek: if we are going to do a revised CR that will be our last chance to do so, and resolve all outstanding issues
#rhiaro... If we get dozens of implementations, we will get dozens of issues
#rhiaro... If we're planning for success, we should expect that
#rhiarosandro: at the very least we have to go through a bunch of issues
#rhiarocwebber2: ...airbnb has affordable lodging... I might be able to do this if we agreed on it right now
#rhiaro... I think it's really important we have this meeting. This time is really important. This location.. but maybe this is the only reasonable time we'll do it. So I'm for it.
#rhiarosandro: one of the main reasons for this location is if we get people wednesday, and talking to people during tpac, to try to bring in new blood and share. Some may stop by WG meeting
#rhiarotsyesika, can you make Lisbon in September?
#rhiaroeprodrom: Can we agree to make this decision in our next telecon?
#rhiarotantek: another key reason is assuming we are doing some rechartering we would do it then
#rhiaro... Ideally better if rechartering occurs before the charter expires
#rhiaroeprodrom: Feels like we have enough of a consensus to go. Everyone can make it work either in person or remotely