Grandmaster All American 10829 Posts |
Let's say you have multiple remote locations with pretty similar PuTTY configs that all contain hard-coded IP addresses. I've toyed with the idea of replacing the IPs with FQDNs so that on the rare occasion a location's IP changes, it's one correction instead of 20+.
My only concern is that I'd be susceptible to DNS poisoning, or if the account were hax0red then someone could in theory set up a redirect. I guess that's why people use VPNs, right? Am I overthinking this, or should I just stick with IPs? It's already scripted, so it's not a huge deal if it had to be redeployed 20 times, but hostnames still seem like so much less work.
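For illustration, the redeploy itself would just be one pass over the exported configs, something like this (the IP, hostname, and file paths are placeholders; PuTTY on Windows actually keeps sessions in the registry, so assume they've been exported to text first):

```shell
# Set up a sample exported session file (placeholder values).
mkdir -p sessions
printf 'HostName=203.0.113.10\nPort=22\n' > sessions/site1.txt

# Swap the hard-coded IP for an FQDN everywhere it appears.
sed -i 's/203\.0\.113\.10/site1.example.com/g' sessions/*.txt

cat sessions/site1.txt
```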
Thoughts? 2/28/2012 6:30:24 PM |
lewisje All American 9196 Posts |
How are you assigning this hostname? Is it like a huge HOSTS file with lines like this:
    ...
    1.2.3.4 example.com
    1.2.3.5 example.com
    1.2.3.6 example.com
    1.2.3.7 example.com
    ...
Or have you set up a special DNS resolver that all of those locations are configured to use, which returns an appropriate round-robin IP address for your FQDN and searches the nameservers for everything else? 2/28/2012 9:26:47 PM |
Grandmaster All American 10829 Posts |
public dns, subdomain on a TLD. 2/28/2012 9:51:54 PM |
lewisje All American 9196 Posts |
...so this means that you've purchased a domain name and created 20 or so A records for it in a round-robin configuration?
Anyway, it definitely is susceptible to DNS cache poisoning (or a maliciously planted HOSTS file, but if an attacker can do that, you have much worse problems). Still, unless the server's credential is compromised (I'm assuming you're using SSH or a similar encrypted protocol here, so that credential is an SSH host key or a certificate), the worst that would happen is that for the duration of the pharming attack your sites would be inaccessible. As long as the host key or certificate is secure, an attacker won't be able to silently redirect users of your sites somewhere else, because the rogue server can't present your credential and clients will raise a mismatch warning instead.
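For SSH specifically, you can make the client hard-fail on a key mismatch rather than just warn. A sketch of an OpenSSH client config (the host pattern is a placeholder; PuTTY does the equivalent automatically by caching host keys and complaining loudly when one changes):

    # ~/.ssh/config — refuse to connect if the server's host key changes,
    # which is exactly what a DNS-poisoning redirect would cause.
    Host *.example.com
        StrictHostKeyChecking yes
        UserKnownHostsFile ~/.ssh/known_hosts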
With that said, if the DNS records are additionally protected with DNSSEC, then you shouldn't even be vulnerable to cache poisoning, because a validating resolver will reject records whose signatures don't check out before they ever land in the cache.
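If you run your own resolver, turning on validation is a couple of lines in, e.g., unbound (a sketch; the trust-anchor path varies by distro):

    # /etc/unbound/unbound.conf — validate DNSSEC signatures and drop
    # records that fail, so poisoned answers never reach clients.
    server:
        auto-trust-anchor-file: "/var/lib/unbound/root.key"
        val-clean-additional: yes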
Still, if the account used to maintain those A records is itself "hax0red" then you just lost the game. 2/28/2012 10:48:29 PM |
Grandmaster All American 10829 Posts |
Yes. Either A) creating a bunch of A records for a purchased domain, or B) using 20 records on a domain owned by DynDNS or some other such provider.
For the sake of argument, if one weren't using SSH but instead relied on hosts.allow server-side, would doing either A or B over hard-coded IP addresses carry any more risk or vulnerability than we would already be exposed to? 2/28/2012 11:22:31 PM |
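For context, a hostname-based hosts.allow would look something like this (service and hostnames are placeholders). Note that tcpd resolves these names via reverse-then-forward DNS lookups at connection time, so a poisoned resolver weakens this check in a way that IP literals avoid:

    # /etc/hosts.allow — permit sshd only from named clients or a subnet.
    sshd : site1.example.com site2.example.com
    sshd : 203.0.113.0/255.255.255.0

    # /etc/hosts.deny — default-deny everything else.
    sshd : ALL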
lewisje All American 9196 Posts |
I'm not sure you can even have more than one IP address associated with a dyndns domain... unless you essentially "purchase" it with Dyn Standard DNS or something more expensive.
With that said, the sites themselves should only be accessible to those who need to use them; but if anyone is able to poison the cache of the users' DNS resolvers, those users could end up connecting to rogue servers.
What you should do IMO is self-sign a cert and distribute it to your users for installation; then if they're subject to a cache-poisoning attack, their clients will complain and they'll know that something is wrong. Now what are your users using to access these sites, if not SSH? Is it Telnet? FTP? 2/29/2012 12:52:12 AM |
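Generating the cert to distribute is a one-liner with openssl (a sketch; the CN and filenames are placeholders). Clients that have this cert installed will complain if a pharming attack lands them on a server presenting anything else:

```shell
# Create a self-signed certificate and key (placeholder CN, 1-year validity).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=site1.example.com" \
    -keyout site1.key -out site1.crt

# Inspect what was generated.
openssl x509 -in site1.crt -noout -subject
```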
smoothcrim Universal Magnetic! 18968 Posts |
you could over-engineer it and have a login portal in the public cloud with a single public, static IP and all the fault tolerance and whatnot, then go from that machine to whatever other machine you need — that way you only maintain the config in one place. FQDN is what I would use, since a well-run public resolver is much harder to poison than whatever random default the clients have. perhaps configure all the clients to use a public resolver like Google DNS rather than their default? 2/29/2012 9:35:56 AM |
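The portal idea can be nearly seamless with OpenSSH's ProxyJump (a sketch; host names are placeholders — only the portal's address would ever need updating):

    # ~/.ssh/config — reach every remote location through one bastion.
    Host portal
        HostName portal.example.com
        User admin

    # Any site*.internal host is tunneled through the portal transparently.
    Host site*.internal
        ProxyJump portal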
Grandmaster All American 10829 Posts |
They use public DNS (either OpenDNS, Google, or the ISP's). A portal wouldn't work unless it was seamless. I guess I was just worried about hosts files and/or the dyndns account getting abused. I guess if it ever got to the point we're hypothesizing about, hostnames would be the very least of my worries? 2/29/2012 9:50:04 AM |