
Re: [Captive-portals] Questions about PvD/API



Thanks Tommy,

I don't dispute that PvD provides an elegant set of solutions -- particularly in enterprise and other 'private' networks. I question, however, its value in public(/guest) access -- where everyone wants you to access their network over others, for 'retail analytics' or branding/attribution(/exploit) purposes.

Another way to see the PvD integration/deployment:

1. Today, we join a network, always do a probe, which redirects to a captive portal
2. A PvD URL is provided, so a captive portal notification is generated for the user (is that what 'we just make a connection directly' means?)
3. We may have also gotten an RFC 7710 URL, so there are potentially two APIs in play at the same time, which is extra confusing (?)
4. The first NAS vendor releases products with support; venues deploy and start 'fiddling' with the new feature and the URL to PvD endpoints
5. The first UE vendor releases products with support; users start using them at said venues... and complain to the vendor about problems unique to this new device
6. In some networks, users complain that *only* their new PvD device is seeing a captive portal, while all their other devices do not. Staff at the coffee shop don't believe them; all the staff's devices work too.

I think there are fundamental issues in splitting what should be 'network notification' into web APIs....

1. Tomorrow, we join a network, always do a probe, which redirects to a captive portal

It wasn't clear from your e-mail whether RFC 7710 can be used *without* providing a URL, or whether there is a PvD-specific DHCP option?
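(For concreteness: RFC 7710 carries the portal URI as the raw payload of DHCPv4 option 160, DHCPv6 option 103, or IPv6 RA option 37. A minimal decoding sketch -- the function name, the example URL, and the "empty payload means no portal" interpretation are illustrative assumptions, not anything the RFC defines:)

```python
from typing import Optional

def parse_captive_portal_option(payload: bytes) -> Optional[str]:
    """Decode the captive-portal URI from an RFC 7710 option payload.

    The URI is carried as the raw option bytes. Treating an empty
    payload as "no portal" is an assumption made here for the sketch;
    RFC 7710 itself does not define that semantic.
    """
    uri = payload.decode("ascii").rstrip("\x00")
    return uri or None

# Example URL is made up:
print(parse_captive_portal_option(b"https://portal.example.com/api"))
```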

Thanks,
David


On Wed, Aug 16, 2017 at 9:20 AM, Tommy Pauly <[email protected]> wrote:
Hi David,

You mention in one of your emails that you'd expect there to be many "broken PvD" deployments, which would either necessitate ignoring PvD and using legacy mechanisms, or else having the user face a broken portal. My impression is that client-side deployments should fail closed -- that is, if there is a PvD advertised but it does not work, then we treat the network as broken. If this client behavior is consistent from the start of deployment, then I would think that deployments would notice very quickly when they are broken. The fundamental part of the PvD being advertised is in the RAs -- if our DHCP or RAs are broken on a network, we are generally going to be broken anyhow.
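(The fail-closed policy described above could be sketched roughly as follows -- the function name, the verdict categories, and the use of a "captive" key in the API response are all illustrative assumptions, not from any spec:)

```python
from enum import Enum, auto
from typing import Optional

class NetworkVerdict(Enum):
    OPEN = auto()      # no captivity detected; proceed normally
    CAPTIVE = auto()   # PvD API reachable and reports captivity
    BROKEN = auto()    # PvD advertised, but its API does not work

def evaluate_network(pvd_advertised: bool,
                     pvd_api_response: Optional[dict]) -> NetworkVerdict:
    """Fail-closed policy: an advertised-but-unusable PvD marks the
    network as broken rather than silently falling back to legacy probing."""
    if not pvd_advertised:
        return NetworkVerdict.OPEN       # legacy probe path applies instead
    if pvd_api_response is None:         # RA named a PvD, but the API failed
        return NetworkVerdict.BROKEN
    if pvd_api_response.get("captive", False):
        return NetworkVerdict.CAPTIVE
    return NetworkVerdict.OPEN
```

The key design choice is the BROKEN verdict: a misconfigured PvD surfaces immediately rather than being papered over, which is what would give deployments the fast feedback described above.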

As far as where the API resides, I appreciate your explanation of the various complexities. My initial take is this:

- Where a PvD is being served is up to the deployment, and determined by the entity that is providing the RAs. To that end, the server that hosts the API for extended PvD information may be very different for enterprise/carrier scenarios than in captive portals for coffee shops.
- For the initial take for Captive Portals, I would co-locate the "PvD API" server with the "Captive API" and "Captive Web Server". Presumably, the device that was previously doing the HTTP redirects would be able to do similar coordination, making sure the PvD ID that is given out to clients matches the PvD API server (which is the same as the "Captive Web Server").

For the captive use-case, I see the integration of PvDs as an incremental step:

1. Today, we join a network, always do a probe, which may get redirected to a captive web server
2. With RFC 7710, we would join a network and do the same as (1), unless the captive URL is given in the DHCP/RA and we just make a connection directly.
3. With the Captive API draft, we can interact with the portal other than just showing a webpage; but this may still be bootstrapped by 7710 if not using another mechanism
4. With PvDs, the mechanism in RFC 7710 is generalized to support APIs other than just captive, and can indicate that no captive portal or other extended info is present. The PvD API in this form is just a more generic version of the Captive API, allowing us to use the same mechanism for other network properties that aren't specifically captive (like enterprise network extended info, or walled gardens)
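(A rough sketch of the client side of step 3 above -- the JSON key names "captive" and "user-portal-url" follow the Captive Portal API draft, but the example response values, the function name, and the default-to-captive choice are illustrative assumptions:)

```python
import json

# Example of what a Captive API server might return (values made up):
EXAMPLE_RESPONSE = (
    '{"captive": true,'
    ' "user-portal-url": "https://portal.example.com/login"}'
)

def portal_action(state: dict):
    """Decide what the client should do from the parsed API state.

    Defaulting to captive when the key is missing is a fail-closed
    assumption made for this sketch.
    """
    if state.get("captive", True):
        return ("show-portal", state.get("user-portal-url"))
    return ("proceed", None)

state = json.loads(EXAMPLE_RESPONSE)
print(portal_action(state))
```

The point of the API interaction, versus step 1's redirect, is that the client learns its captivity state from structured data rather than by having an unrelated HTTP request hijacked.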

Getting into the arms race of people avoiding the captive probes: if someone doesn't want to interact with the client OS's captive portal system, they can and likely will not change anything and just keep redirecting pages. Hopefully, if a better solution becomes prevalent enough in the future, client OSes will be able to simply ignore and reject any network that redirects traffic, and the only supported captive portals would be ones that interact in specified ways and advertise themselves as captive networks. In order to get to this point, there certainly needs to be a carrot to incentivize adoption. My goal with the more flexible interaction supported by PvD is that we will allow networks to provide a better user experience to people joining their networks, and integrate with client OSes to streamline the joining process (notification of the network being available, who owns it, how to accept, and how to pay), the maintenance process (being able to integrate time left or bytes left on the network into the system UI), and the communication of what is allowed or not on the network.

Thanks,
Tommy


On Aug 16, 2017, at 6:51 AM, David Bird <[email protected]> wrote:

My question about where the PvD API resides was somewhat rhetorical. In reality, I'm sure you will find all of the above -- in the NAS (e.g. Cisco), at the hotspot services provider, and something hosted next to the venue's website. It depends mostly on how this URL is configured, and by whom. (One could imagine people doing all sorts of things.)

My question more specifically for the authors is: how would Cisco implement PvD for Guest/Public access, and would it actively stop avoiding Apple's captive portal detection? Or would turning on PvD just make that 'feature' easier to implement?

On Tue, Aug 15, 2017 at 5:19 PM, Erik Kline <[email protected]> wrote:
Randomly selecting Tommy and Eric so this bubbles up in their inbox.

On 2 August 2017 at 10:36, David Bird <[email protected]> wrote:
> Could an author of PvD help me understand the following questions for each
> of the diagrams below I found on the Internet -- which represent some
> typical hotspot configurations out there...
>
> - Where would the API reside?
>
> - Who 'owns' the API?
>
> - How does the API keep in-sync with the NAS? Who's responsible for that
> (possibly multi-vendor, multi-AAA) integration?
>
> 1) Typical Hotspot service company outsourcing:
> http://cloudessa.com/wp-content/uploads/2013/08/shema-CaptivePortalSolution_beta2b.png
>
> 2) Same as above, except venue owns portal:
> http://cloudessa.com/wp-content/uploads/2013/07/solutions_hotspots-co-working-cloudessa_2p1.png
>
> 3) Now consider the above, but the venue has more roaming partners and
> multi-realm RADIUS setup in their Cisco NAS:
> http://www.cisco.com/c/en/us/td/docs/wireless/controller/8-3/config-guide/b_cg83/b_cg83_chapter_0100111.html
> describes many options -- including separate MAC authentication sources,
> optional portals for 802.1x (RADIUS) authenticated users, and so much
> more...
>
> "Cisco ISE supports internal and external identity sources. Both sources can
> be used as an authentication source for sponsor-user and guest-user
> authentication."
>
> Also note this interesting article: the section "Information About Captive
> Bypassing" describes how to avoid Apple's captive portal
> detection!!! "If no response is received, then the Internet access is
> assumed to be blocked by the captive portal and Apple’s Captive Network
> Assistant (CNA) auto-launches the pseudo-browser to request portal login in
> a controlled window. The CNA may break when redirecting to an ISE captive
> portal. The controller prevents this pseudo-browser from popping up."
>
>
>
> _______________________________________________
> Captive-portals mailing list
> [email protected]
> https://www.ietf.org/mailman/listinfo/captive-portals
>