And you can't even configure a local network anymore without
    access to the internet.
     
     
     
    It's like those internet streaming devices people hate, the ones
    that can also access the local network. There are internet radios
    that will only work if they can contact the internet. There are
    alarm clocks that stop working if your internet access goes down.
     
    This system is not good, is all I'm saying.
     
    I hope this informs some of the decisions being made. Good luck. 
     
    Bye. Bart. 
     
     
    On 20-3-2016 at 16:36, Xen wrote:
     
    
      
      By the way, if UPnP was ever a problem in terms of NAT security,
      obviously the problem is much worse in IPv6, since there is not
      even any NAT and all devices are always exposed. 
       
      Even though you are living together in a "house", all
      "residents" now need to solve these issues on their own.
       
      This makes it almost impossible to run any kind of home server,
      because the default setup is going to be: access internet
      services, don't worry about local network access, or, even if
      you do have it, accept that you will constantly be at risk of
      getting hacked or exposed.
       
      The exposure might be dealt with by the protocols (if they
      work), but there is a high chance they won't, because the
      starting model is that everything is exposed.
       
      If your premise is complete exposure, if that is what you intend
      and want, then you won't be able to achieve meaningful protection
      in any way. 
       
      If you banish all clothes and then try to find a way for people to
      not see you naked, that won't work. 
       
      "So we have no clothes anymore, how can we find a way for people
      to not be cold and to not be seen naked? Hmmm difficult". 
       
      You know, maybe don't banish the clothes. 
       
      Maybe don't banish NAT. 
       
      Maybe don't banish localized, small, understandable networks. 
       
      Maybe don't banish the boundary between the local and the remote.
      Maybe not do away with membranes. 
       
      Nature has designed life around membranes, all cells have
      membranes. "Cell membranes protect and organize cells. All cells
      have an outer plasma membrane that regulates not only what enters
      the cell, but also how much of any given substance comes in." 
       
      From my perspective, the basic topology of IPv6 is deeply
      misunderstood and misdesigned: it tries to create a membrane
      based purely on subnet masking.
       
      And that's not a safe thing, because a misconfigured system
      automatically gives access. You want all internal addresses to
      be in the same pool (as the router accepts a list or segment of
      addresses from the ISP). The router is supposed to distribute
      those addresses across clients while allowing them to know and
      find each other, i.e. by giving them the information on the
      subnet masks they are supposed to use. The subnet mask becomes
      everyone's potential and right to not care about any fixed
      boundary between the local and the remote (wide) network.
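      To make that concrete, here is a small sketch using Python's
      standard ipaddress module of how a delegated prefix gets carved
      into per-link /64 subnets by the router (the /56 prefix comes
      from the documentation range and is chosen purely as an
      example):

      import ipaddress

      # Assumption for illustration: the ISP delegates a /56 out of
      # the documentation range to the home router.
      delegated = ipaddress.ip_network("2001:db8:abcd:100::/56")

      # The router can split that into 256 /64 networks and hand one
      # to each link or segment; hosts then pick their own addresses
      # inside their /64.
      subnets = list(delegated.subnets(new_prefix=64))
      print(len(subnets))   # 256
      print(subnets[0])     # 2001:db8:abcd:100::/64
      print(subnets[1])     # 2001:db8:abcd:101::/64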
       
      Maybe you can call it empowerment (everyone has a public address).
      But it is also a huge loss of control. It's a loss of power.
      Networks that can't be configured by any individual person.
      Inability to shield anything from anyone in a real sense. 
       
      Local clients (i.e. Linux, Windows and Mac computers, and
      Android phones) now require the intelligence to safely
      distinguish between local and remote services, a problem that
      was never even solved in IPv4, let alone one that IPv6 stands
      the slightest chance of meaningfully solving.
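      For what it's worth, about the only tool a client has for that
      distinction is looking at the address ranges themselves. A
      minimal sketch with Python's standard ipaddress module (the
      addresses are just examples; 2606:4700:4700::1111 is
      Cloudflare's public DNS, standing in for any global IPv6
      address):

      import ipaddress

      examples = [
          "192.168.1.5",           # RFC 1918: local-only under IPv4 + NAT
          "fe80::1",               # IPv6 link-local
          "fd00::1234",            # IPv6 unique local address (ULA)
          "2606:4700:4700::1111",  # a global IPv6 address (Cloudflare DNS)
          "8.8.8.8",               # a public IPv4 address (Google DNS)
      ]

      for text in examples:
          addr = ipaddress.ip_address(text)
          # is_private/is_global is the closest thing a client gets to a
          # "local vs. remote" answer -- and it says nothing about *my* LAN.
          print(f"{text:22} private={addr.is_private} global={addr.is_global}")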
       
      All of these devices need to cooperate perfectly in order to
      find and know the local network, particularly if there is a
      segmentation between "secure" and "insecure" or between "guest"
      and "resident". And what if you want two subnets for different
      purposes? A managed switch has means to physically separate (in
      a way) two different nets on the same cables. You may want to
      run a network of servers in your home that is separate from
      your local home network. You lose pretty much all control over
      doing this effectively.
       
      Even if IPv6 gives some freedom or liberation, it is mostly due to
      the router allowing this. Everyone his own IP address. Everyone
      his own front door. People love that, in a way. But it also means
      you no longer have a family. 
       
       
       
       
       
       
       
      On 20-3-2016 at 16:05, Xen wrote:
       
      
        
        On 20-3-2016 at 11:56, Tim Coote wrote:
        
          
          
          Is it intended that NetworkManager will conform to /
          support / exploit the Homenet network name and address
          assignment and routeing protocols (http://bit.ly/1LyAE7H),
          either to provide end-to-end connectivity or to provide a
          monitoring layer so that the actual state of the network
          topologies can be understood?
           
           
          Home or small networks seem to be getting quite complex
          quickly, and way beyond what consumers can be expected to
          understand/configure/troubleshoot/optimise, with one or two
          ISPs per person (via mobile phone + wifi connectivity) + an
          ISP for the premises, and wildly differing network segment
          performance characteristics and requirements (e.g. media
          streaming vs home automation).
         
         
        I can't answer your question, but in my mind there are mostly
        only two or three issues:
         
        - a mobile phone that connects through home wifi, external
        wifi, or mobile/3G/4G connectivity will predominantly access
        internet (cloud) services and by default have no access to
        anything in the home network, and there is not really any
        good platform to enable this.
         
        (All it really needs is a router that provides NAT loopback,
        an internet domain, and a way to access LAN services both
        from the outside and the inside.) (But most people don't run
        services on their network anyway, except when it is some
        appliance-like NAS.)

        (But if you're talking about media streaming and home
        automation, this outside/inside access does become
        important.)
         
        Other than that, there is no issue for most people. If your
        mobile app is configured with an internal IP address, you get
        into trouble when you are outside the network; if you
        configure it with an external address, not all routers will
        allow you to access it from the inside.
         
        For example, the popular (or once popular) D-Link DIR-655
        router doesn't allow it, while all TP-Link routers are said
        to support it (a support rep from China or somewhere
        demonstrated it to me with screenshots).
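        In practice the app ends up working around this itself. A
        minimal client-side sketch (the address 192.168.1.5, the
        hostname home.example.com and port 8080 are made up for
        illustration): try the LAN address first with a short
        timeout, then fall back to the public one, which is roughly
        what you are forced to do when the router has no NAT
        loopback.

        import socket

        # Hypothetical service endpoints: LAN address first, public
        # name second.
        CANDIDATES = [("192.168.1.5", 8080), ("home.example.com", 8080)]

        def connect_to_service(timeout: float = 1.0) -> socket.socket:
            last_error = None
            for host, port in CANDIDATES:
                try:
                    # Inside the LAN the first entry works; outside (or
                    # behind a router without loopback) the second one is used.
                    return socket.create_connection((host, port), timeout=timeout)
                except OSError as exc:
                    last_error = exc
            raise ConnectionError(f"no candidate reachable: {last_error}")

        if __name__ == "__main__":
            conn = connect_to_service()
            print("connected to", conn.getpeername())
            conn.close()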
         
        - I don't think IPv6 is a feasible model for home networking,
        and configuring it is said to be a nightmare even for those
        who understand it. I don't think it solves any problems, or
        at least it doesn't solve them in a way that makes home
        networking easier to use. I think IPv6 is a completely flawed
        thing to begin with, but as long as it stays on the outside,
        I don't care very much. NAT shielding from the outside is a
        perfect model. Anyone requiring network access from the
        outside should be in a situation where they are able to
        configure it (port forwarding). Even where you could (as an
        advanced user) need two or more IP addresses at home, you
        still don't need a hundred or 65535 of them. IPv6 in the home
        solves problems practically no one has, and opens up every
        device to internet access without any firewall in between. If
        home networking is only defined by the subnet mask, it
        becomes a pain to understand how you can shield anything from
        anyone. You have to define your home network in public
        internet IP address terms. No more easy 192.168.1.5 that even
        non-technical users recognise. If there's no NAT, you're
        lost, and only wannabe "I can do everything" enthusiasts
        really understand it.
         
         
        When I read the Charter of that homenet thing, it is all about
        IPv6: https://datatracker.ietf.org/wg/homenet/charter/ 
         
        Their statements are wildly conflicting: 
         
        "While IPv6 resembles IPv4 in many ways, it changes address
        allocation principles and allows 
        direct IP addressability and routing to devices in the home from
        the Internet. ***This is a promising area in IPv6 that has
        proved challenging in IPv4 with the proliferation of NAT.***"
        (emphasis mine) 
         
        "End-to-end communication is both an opportunity and a concern
        as it enables new applications but also exposes nodes in the
        internal networks to receipt of unwanted traffic from the
        Internet. Firewalls that restrict incoming connections may be
        used to prevent exposure, however, this reduces the efficacy of
        end-to-end connectivity that 
        IPv6 has the potential to restore." 
         
        The reality is of course that people (and games/chat
        applications) have always found a way around the problems.
        UPnP port forwarding, while not perfect, has been a make-do
        solution that basically allowed any application end-to-end
        connectivity, as long as it doesn't require fixed ports (few
        do).
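        For illustration, this is roughly what that looks like from
        an application's point of view, assuming the miniupnpc Python
        bindings are installed and the router has UPnP IGD enabled
        (port 51000 and the description are arbitrary):

        import miniupnpc

        upnp = miniupnpc.UPnP()
        upnp.discoverdelay = 200   # milliseconds to wait for discovery
        upnp.discover()            # find UPnP-capable gateways on the LAN
        upnp.selectigd()           # pick the Internet Gateway Device

        # Ask the router to forward external TCP port 51000 to this host.
        upnp.addportmapping(51000, "TCP", upnp.lanaddr, 51000,
                            "example mapping", "")

        print("external IP:", upnp.externalipaddress())

        # Clean up the mapping again when done.
        upnp.deleteportmapping(51000, "TCP")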
         
        The internet has been designed around two basic topologies:
        client-server and peer-to-peer. Client-server has never been
        challenging with NAT, only peer-to-peer has. For example,
        online games typically use a client-server model in which NAT
        is a complete non-issue. The only time NAT becomes an issue
        is with direct connections between two clients (peers), which
        is a requirement mostly for communication applications and/or
        applications that require high bandwidth (voice, video).
         
        Ever since UPnP, these devices can simply go to any router on
        the network, say "open me up", and they have access. I would
        have preferred the thing to be a little different (I didn't
        really look at the specs, but you get a feel for it), giving
        more control to the home network operator (the way it is now,
        it is a haven and a heaven for viruses, trojans and
        backdoors), but essentially as soon as a central server
        coordinates the setup between two peers, they are done and
        the issue is solved. Servers by definition do not need to
        contact clients on their own. Devices with random addresses
        by definition do not need to be reached via a static,
        well-defined or well-known address that lives outside of any
        server that negotiates access to them.
         
        If a server is negotiating access to it anyway, it can also
        help in setting up the connection, and this has always been
        done: instant messenger chat applications (MSN, Yahoo, etc.),
        while being very simple programs, worked through this model.
        If you talk about the modern age of smartphone apps, they all
        work through the same model: cloud interfaces, messages often
        being sent through the server, and in other cases
        peer-to-peer connections being set up by central (cloud)
        servers.
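        That coordination step is really all a central server has to
        do for the peer-to-peer case. A minimal UDP hole-punching
        sketch (the rendezvous host, port and wire format here are
        entirely made up; real apps use STUN/ICE-style machinery, but
        the principle is the same):

        import socket
        import sys

        RENDEZVOUS = ("rendezvous.example.com", 9999)  # hypothetical coordinator

        def punch(my_id: bytes) -> None:
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            sock.bind(("0.0.0.0", 0))

            # 1. Register with the coordinator, which records our public
            #    (ip, port) as seen from outside, i.e. after NAT translation.
            sock.sendto(my_id, RENDEZVOUS)

            # 2. The coordinator answers with the other peer's public
            #    endpoint, encoded here simply as "ip:port".
            data, _ = sock.recvfrom(1024)
            host, port = data.decode().split(":")
            peer = (host, int(port))

            # 3. Both sides now send to each other's public endpoint. The
            #    outgoing packets create mappings in each NAT, so the
            #    incoming packets from the other side start getting
            #    through -- no port forwarding needed.
            for _ in range(5):
                sock.sendto(b"hello from " + my_id, peer)
            message, sender = sock.recvfrom(1024)
            print("received", message, "from", sender)

        if __name__ == "__main__":
            punch(sys.argv[1].encode())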
         
        The challenges to MY mind are really: how can I set up
        home-controlled "cloud" services that seamlessly integrate
        into, or sync with, cloud-based (backup) solutions? How can I
        bridge the local and the remote? That is MY issue.
         
        You can either set up some NAS in your home that has its own
        cloud features, or you can acquire it from a remote service
        provider, but you can't do both, or anything in between, is
        what I mean.
         
        And yet, they say: 
         
        "Home networks need to provide the tools to handle these
        situations in a manner accessible to all users of home networks.
        Manual configuration is rarely, if at all, possible, as the
        necessary skills and in some cases even suitable management
        interfaces are missing." 
         
        So basically they want to create an automated, complex,
        algorithm- and protocol-based resolving architecture that
        will solve an incredibly difficult problem that was created
        by introducing something that was not needed.
         
        "Restore end-to-end connectivity". End-to-end connectivity was
        never an issue after the advent of UPnP port forwarding. You
        could redesign UPnP (or I could) and the system is really simple
        in essence, and essentially what you need even if it opens op
        backdoors and potential for acquired malware, but that's the
        same way everywhere: your kids can open up the windows and doors
        to your house, unless you lock those doors and windows. 
         
        You don't go and solve that problem by giving all the kids
        and residents in your house a front door of their own, and
        then worrying about how you can prevent unwanted access to
        those doors and windows.
         
        And when everyone has a front door of their own, there are no
        more hallways, and then you go and think about how you are going
        to solve the problem of having no hallways. 
         
        And then you start thinking about several kinds of locks and
        keys, and having locks on your doors that only residents have
        keys to, or locks that outsiders can also access, and how you
        can distribute those keys and locks, and it is too complex
        for people to do on their own, so we need computers to solve
        it for us...
         
        And you've really lost it. You've lost your sanity and your
        sensibility. 
         
        In my eyes NetworkManager can't even get the basic thing of a
        NAT IPv4 home network right. Now you want it to provide
        advanced tools for topology monitoring.
         
        I would redesign NetworkManager to begin with, but that's
        just me. There are some bad choices being made that create a
        huge amount of problems, as in most current software. I see
        no end to the nonsense that is being introduced, pardon my
        language. I mean in general, not here. I think most people
        have a bad feel for what makes elegant software, and you can
        see it in systems such as:

        - systemd,
        - Linux software RAID (mdadm),
        - any number of things that I don't know enough about.
         
        Linux people propose a mindset of "don't think, just do" or
        "don't criticise, just get your hands dirty", i.e. they
        profess a belief in not really thinking things through, but
        just getting to work on the things that already exist. Then
        maybe, after you have earned your reputation, you will come
        to be in a position where you can design stuff. And the open
        source development model favours this.
         
        I was involved in the creation, or the thinking up, of a new
        feature for the Log4J logging platform for Java. What
        happened is that an outsider other than me suggested a new
        feature and showed a proof of concept. I tried to discuss the
        architecture of what was really needed, but the project lead
        just took the idea, whipped up a solution in no time that
        coincided with what was already there, pushed it, had a vote
        on it, and basically just made it a part of the core package.
        The original person and myself were left in the dark, and the
        original person was never heard from again.
         
        Log4J version 2 is flawed and parts of it require a lot of
        thinking, but the solution that was included basically uses
        the existing architecture to the max, thus requiring very
        little coding. That means it also has all the flaws that the
        main architecture has.
         
        If an architecture has flaws, the more it tries to do, the
        bigger the problems become. 
         
        If your foundation is flawed, it doesn't matter very much as
        long as you only try to do very little. The more you try,
        however, the bigger the systems you build on top of it
        become, and the more of a cancer they become.
         
        I'm sorry if I make it sound as if I feel this way about
        NetworkManager; in essence it is still a small system. But
        there is no end to the problems you create if you are going
        to try to implement any kind of protocol or complex
        problem-solving thing in it. Even if you want a monitoring
        tool for IPv6 resolution protocols, you'd need to try to make
        it as standalone as possible, and to try to reduce the
        contact surface with NetworkManager. In the sense of: okay,
        it's there, we can write a plugin for using it, but we have
        to stay a bit removed from it so its problems don't affect us
        as much. Like the idea of being an older brother or a more
        mature mentor ;-)...
         
        That's if you are only talking about monitoring.
         
        If you want to actively support those protocols, there are
        three scenarios:

        1. Create a different version of NetworkManager that adds
        onto the core and wraps around it in a way. Don't make this
        part of the core; keep creating, fixing and evolving the core
        on its own.
        2. Try to create independent libraries that contain all the
        complexity and mathematics of the resolution protocols, that
        are easy to use and that provide a simple API that NM can
        hook onto (see the sketch after this list). That way neither
        affects the other as much, and if NM wants to provide an
        interface for handling these things, that's fine; the
        complexity of it will be up to a library writer to solve.
        3. Keep evolving or redesign the core product. Solve the
        problems it has. Then do any of the above.
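        To make scenario 2 a bit less abstract, here is a purely
        hypothetical sketch of what such a narrow API could look
        like; every name in it (HomenetMonitor, Prefix,
        on_prefix_changed) is invented for illustration and nothing
        like it exists in NetworkManager today:

        from dataclasses import dataclass
        from typing import Callable, List

        @dataclass
        class Prefix:
            network: str      # e.g. "2001:db8:abcd:100::/64"
            interface: str    # e.g. "eth0"

        class HomenetMonitor:
            """Owns all the protocol state; consumers only see results."""

            def __init__(self) -> None:
                self._listeners: List[Callable[[Prefix], None]] = []

            def on_prefix_changed(self, callback: Callable[[Prefix], None]) -> None:
                # NetworkManager (or anything else) registers a callback
                # and never touches the protocol machinery directly.
                self._listeners.append(callback)

            def _notify(self, prefix: Prefix) -> None:
                for callback in self._listeners:
                    callback(prefix)

        # The consumer side stays trivially small:
        monitor = HomenetMonitor()
        monitor.on_prefix_changed(lambda p: print("assign", p.network, "to", p.interface))
        monitor._notify(Prefix("2001:db8:abcd:100::/64", "eth0"))  # simulated event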
         
        All in all, what you want is more elegance and simplicity in
        the entire thing, even if the IPv6 addition is a tangle of
        wires of its own accord.
         
        You know, make it easy for people to switch between IPv4 and
        IPv6, or even make it possible to use both at the same time, I
        don't know. 
         
        Sorry for the rant. If everything in this life were up to me,
        I would:

        - throw away IPv6 and design something that actually made sense
        - change NetworkManager by thinking about the interface to
        users first and then designing everything around that
        - dump Linux altogether and try to exit this planet ;-).
         
        I would do a lot of other things. There is no end to the
        projects I would undertake. There is also no end to the tears I
        witness every day in my life. 
         
        I'll see you, and sorry for the implication that this is in
        any way a flawed product. I do believe there is a competitor
        to it now (in the Linux world), but since this is your bread
        and butter and your homestead, pardon my offensive language.
        Please.
         
        Regards. 
         
         
        Bart. 
       
       
       
      
       
      _______________________________________________
networkmanager-list mailing list
networkmanager-list@gnome.org
https://mail.gnome.org/mailman/listinfo/networkmanager-list
 
     
     
  