Hi David,

My view on RFC 7710 is that it is not deployed as far as I know, and no client stack respects the value it delivers. Without some API extensions, it isn't directly better than what we currently have. Ideally, RFC 7710 would not get deployed at all if we were also using PvDs. My concern is that if PvDs are used for enterprise and private networks, we'll end up with a very similar but less complete path based on RFC 7710. We could end up deprecating or replacing that RFC, as was mentioned in our last meeting. I don't think RFC 7710 can be used without a URL, which is why I think we need a solution that does a better job of indicating the lack of captivity or other extended network info.

I would hope that since both iOS and Android stack developers are working on the UE side, we would actually see UE deployment of PvDs before any captive vendors adopt PvDs, and we'd be standardizing around Cisco/etc enterprise deployments. By the time there were NAS vendors deploying, they would test with both iOS and Android devices to validate support. Basing our standards on the idea that devices (either networks or UEs) may implement the RFCs incorrectly seems like a difficult starting point.

I like the point you bring up about splitting network notifications from web APIs. We need to be judicious about which properties fall into each category. I think you're saying that the fact that there is a captive network can be signaled via ICMP, etc., as a network-level property. While ICMP is a fine solution for giving the UE hints when something has expired, I am concerned that (possibly unsolicited) network signaling is not the correct mechanism for the content details of the network, whether that is the enterprise network properties, or the captive network's Terms & Conditions, tokens, expiration timers, and URLs for various kinds of user interaction. A JSON API is one form of grabbing that information; I don't think we should necessarily interpret it as a high-level Web interaction. We could create some custom protocol over UDP, like DNS records, to get the information (that would be a lot of new protocol work that people may not be willing to get into), but the key is that the lookup is initiated by a UE that understands how to request and parse the content, and that can fetch the information from the network infrastructure.

With regard to your assertion that we'll always revert to doing a probe, I still would like to believe that if we have a network that advertises a PvD with no extended information, or extended information that doesn't include a captive portal, we can avoid the probe altogether. Will we still have networks that redirect HTTP requests? Yes. But that's no different from the scenario today in which a network whitelists our captive detection probes. We can still get to a captive portal once the user goes into the browser. We can stop doing probes whenever the RA on the network indicates that it supports explicit signaling about network properties. If a network operator wants to invoke the system-level captive interaction, they will follow the RFCs we come up with in the CAPPORT group, as long as UEs end up deploying support first. If they want to avoid it, or they have a broken network, things will be like networks that whitelist our probes today. Not great, but still possible for the user to get through.
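To make that join-time behavior concrete, here's a minimal sketch of the client-side decision I have in mind -- a sketch only, where the probe URL, the shape of pvd_info, and the function names are all hypothetical:

    import urllib.request

    # Hypothetical probe endpoint; real stacks each use their own URL.
    PROBE_URL = "http://probe.example.net/generate_204"

    def probe_is_redirected():
        """Classic captive detection: fetch a URL with a known answer and
        treat anything other than the expected response as interception."""
        try:
            resp = urllib.request.urlopen(PROBE_URL, timeout=5)
            return resp.status != 204
        except OSError:
            return True  # blocked or broken network: assume captive

    def network_state(pvd_info):
        """Join-time decision. `pvd_info` is a hypothetical dict parsed from
        the extended info of the PvD advertised in the RA, or None when the
        RA carries no explicit signaling."""
        if pvd_info is None:
            # No explicit signaling about network properties: probe as today.
            return "captive" if probe_is_redirected() else "open"
        if "captive" not in pvd_info:
            # PvD advertised with no captive section: skip the probe entirely.
            return "open"
        # Captive details (T&Cs, tokens, expiry, portal URLs) are fetched by
        # the UE from the API, not pushed via unsolicited network signaling.
        return "captive"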
My main goal in these standards is to make it possible for a network to give the user a good experience; not to make it impossible for the user to have a sub-par experience (since I don't think that goal is achievable).

Best,
Tommy

On Aug 18, 2017, at 5:52 PM, David Bird <[email protected]> wrote:

Thanks Tommy,

I don't dispute that PvD provides an elegant set of solutions -- particularly in enterprise and other 'private' networks. I question, however, its value in public(/guest) access -- where everyone wants you to access their network over others, for 'retail analytic' or branding/attribution(/exploit) purposes.

Another way to see the PvD integration/deployment:

1. Today, we join a network, always do a probe, which redirects to a captive portal.
2. A PvD URL is provided, so a captive portal notification is generated to the user (is that what 'we just make a connection directly' means?).
3. We may have also gotten an RFC 7710 URL, so there are potentially two APIs in play at the same time, which is extra confusing (?).
4. The first NAS vendor releases products with support; venues deploy and start 'fiddling' with the new feature and the URL to PvD end-points.
5. The first UE vendor releases products with support; users start using them at said venues... and complain to the vendor about problems unique to this new device.
6. In some networks, users complain that *only* their new PvD device is seeing a captive portal, while all their other devices do not. Staff at the coffee shop don't believe me; all their devices work too.

I think there are fundamental issues in splitting what should be 'network notification' into web APIs...

1. Tomorrow, we join a network, always do a probe, which redirects to a captive portal.

It wasn't clear in your e-mail if RFC 7710 can be used *without* providing a URL, or is there a PvD-specific DHCP option?

Thanks,
David

On Wed, Aug 16, 2017 at 9:20 AM, Tommy Pauly <[email protected]> wrote:

Hi David,

You mention in one of your emails that you'd expect there to be many "broken PvD" deployments, which would either necessitate ignoring PvD and using legacy mechanisms, or else having the user face a broken portal. My impression is that client-side deployments should fail closed -- that is, if there is a PvD advertised but it does not work well, then we treat the network as broken. If this client behavior is consistent from the start of deployment, then I would think that deployments would notice very quickly if they are broken. The fundamental part of the PvD being advertised is in the RAs -- if DHCP or RAs are broken on a network, we are generally going to be broken anyhow.

As far as where the API resides, I appreciate your explanation of the various complexities. My initial take is this:

- Where a PvD is being served is up to the deployment, and determined by the entity that is providing the RAs. To that end, the server that hosts the API for extended PvD information may be very different in enterprise/carrier scenarios than in captive portals for coffee shops.

- For the initial take for captive portals, I would co-locate the "PvD API" server with the "Captive API" and "Captive Web Server". Presumably, the device that was previously doing the HTTP redirects would be able to do the similar coordination of making sure the PvD ID that is given out to clients matches the PvD API server (which is the same as the "Captive Web Server").

For the captive use-case, I see the integration of PvDs as an incremental step (see the sketch after this list):

1. Today, we join a network, always do a probe, which may get redirected to a captive web server.
2. With RFC 7710, we would join a network and do the same as (1), unless the captive URL is given in the DHCP/RA, in which case we just make a connection directly.
3. With the Captive API draft, we can interact with the portal other than just showing a webpage; but this may still be bootstrapped by RFC 7710 if not using another mechanism.
4. With PvDs, the mechanism in RFC 7710 is generalized to support APIs other than just captive, and can indicate that no captive portal or other extended info is present; the PvD API in this form is just a more generic version of the Captive API that lets us use the same mechanism for other network properties that aren't specifically captive (like enterprise network extended info, or walled gardens).
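To illustrate steps (2) and (3), here is a minimal sketch of the UE-side bootstrap. The DHCPv4 option code 160 comes from RFC 7710 itself; the option-parsing interface and the JSON field names are hypothetical, since the Captive API format is still a draft:

    import json
    import urllib.request

    CAPTIVE_PORTAL_OPTION = 160  # DHCPv4 option code assigned by RFC 7710

    def captive_uri_from_dhcp(options):
        """Step (2): extract the RFC 7710 captive-portal URI.

        `options` is assumed to be a dict mapping DHCPv4 option code to raw
        payload bytes; RFC 7710 carries the URI directly as the payload.
        """
        raw = options.get(CAPTIVE_PORTAL_OPTION)
        return raw.decode("ascii") if raw else None

    def query_captive_api(api_uri):
        """Step (3): talk to the Captive API instead of rendering a redirect.

        The 'captive' and 'user-portal-url' field names are hypothetical.
        """
        with urllib.request.urlopen(api_uri, timeout=5) as resp:
            info = json.load(resp)
        if info.get("captive"):
            return info.get("user-portal-url")  # hand off to the portal UI
        return None  # nothing to satisfy; treat the network as open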
Getting into the arms race of people avoiding the captive probes: if someone doesn't want to interact with the client OS's captive portal system, they can and likely will change nothing and just keep redirecting pages. Hopefully, if a better solution becomes prevalent enough in the future, client OSs will be able to simply reject any network that redirects traffic, and the only supported captive portals will be ones that interact in specified ways and advertise themselves as captive networks. In order to get to that point, there certainly needs to be a carrot to incentivize adoption. My goal with the more flexible interaction supported by PvDs is to let networks provide a better user experience to people joining their networks, and to integrate with client OSs to streamline the joining process (notification of the network being available, who owns it, how to accept and how to pay), the maintenance process (being able to integrate time left or bytes left on the network into the system UI), and what is allowed or not on the network.

Thanks,
Tommy

On Aug 16, 2017, at 6:51 AM, David Bird <[email protected]> wrote:

My question about where the PvD API resides was somewhat rhetorical. In reality, I'm sure you will find all of the above -- in the NAS (e.g. Cisco), at the hotspot service provider, and something hosted next to the venue's website. It depends mostly on how this URL is configured, and by whom. (One could imagine people doing all sorts of things.)

My question more specifically for the authors is: how would Cisco implement PvD for guest/public access, and would it actively stop avoiding Apple captive portal detection? Or would turning on PvD just make that 'feature' easier to implement?

On Tue, Aug 15, 2017 at 5:19 PM, Erik Kline <[email protected]> wrote:

Randomly selecting Tommy and Eric so this bubbles up in their inbox.
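For concreteness on the "where does the API reside" question, here is a minimal sketch of a UE-side PvD fetch. The https://<PvD ID>/.well-known/pvd convention is taken from the provisioning-domains draft; the example hostname, the JSON field names, and the co-location with the captive web server are assumptions:

    import json
    import urllib.request

    def fetch_pvd_info(pvd_id):
        """Fetch extended PvD information for a PvD ID (an FQDN) learned
        from the RA, per the .well-known/pvd convention in the
        provisioning-domains draft."""
        url = "https://%s/.well-known/pvd" % pvd_id
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp)

    # Hypothetical usage: a coffee-shop deployment that co-locates the PvD
    # API with the captive web server, as suggested earlier in the thread.
    info = fetch_pvd_info("portal.example.com")
    if "captive" in info:
        # Field names here are hypothetical, not from the draft.
        print("Captive network; portal at", info["captive"].get("user-portal-url"))
    else:
        print("No captive section advertised; no probe needed.")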
On 2 August 2017 at 10:36, David Bird <[email protected]> wrote:
> Could an author of PvD help me understand the following questions for each
> of the diagrams below, which I found on the Internet -- they represent some
> typical hotspot configurations out there...
>
> - Where would the API reside?
>
> - Who 'owns' the API?
>
> - How does the API keep in-sync with the NAS? Who's responsible for that
> (possibly multi-vendor, multi-AAA) integration?
>
> 1) Typical Hotspot service company outsourcing:
> http://cloudessa.com/wp-content/uploads/2013/08/shema-CaptivePortalSolution_beta2b.png
>
> 2) Same as above, except venue owns portal:
> http://cloudessa.com/wp-content/uploads/2013/07/solutions_hotspots-co-working-cloudessa_2p1.png
>
> 3) Now consider the above, but the venue has more roaming partners and
> multi-realm RADIUS setup in their Cisco NAS:
> http://www.cisco.com/c/en/us/td/docs/wireless/controller/8-3/config-guide/b_cg83/b_cg83_chapter_0100111.html
> describes many options -- including separate MAC authentication sources,
> optional portals for 802.1x (RADIUS) authenticated users, and so much
> more...
>
> "Cisco ISE supports internal and external identity sources. Both sources can
> be used as an authentication source for sponsor-user and guest-user
> authentication."
>
> Also note this interesting article: the section "Information About Captive
> Bypassing" describes how to avoid Apple captive portal
> detection!!! "If no response is received, then the Internet access is
> assumed to be blocked by the captive portal and Apple’s Captive Network
> Assistant (CNA) auto-launches the pseudo-browser to request portal login in
> a controlled window. The CNA may break when redirecting to an ISE captive
> portal. The controller prevents this pseudo-browser from popping up."
>