Now secured with DNSSEC

Over the last few days, all of my web services have been secured with DNSSEC. I had been using DNSPod for some time and was pretty satisfied with their service, but after several incidents where my domains failed to resolve from abroad, I decided to change my DNS service. So my DNS service has been changed, and it is now also secured with DNSSEC.

DNSSEC is a chain-of-trust scheme that authenticates each DNS reply using asymmetric cryptography. The chain starts from the root zone “.”, goes through a gTLD such as “org.”, and ends at the registrant’s own domain. It only signs; DNS replies are not encrypted and can still be cached. The weakest point is that your domain registrar has total control over the DNSSEC keys (it publishes the DS record), so if the registrar wanted to swap them for something else, it could. Also, the signing keys of “.” and “org.” are both 1024-bit RSA, so a really big supercomputer might break one within its expiry window (supposedly there is about a 1.47% chance of breaking a 1024-bit RSA key with Tianhe-2 within six months).
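You can poke at this chain yourself. Here is a small illustrative sketch with dnspython, assuming a resolver that will hand back DNSSEC record types (the zones below are just placeholders, not mine): each zone publishes DNSKEY records, and its parent publishes a DS record that pins the child’s key.

    import dns.resolver

    # Walk down one branch of the chain: root -> org. -> some signed domain.
    for name, rdtype in ((".", "DNSKEY"), ("org.", "DS"),
                         ("org.", "DNSKEY"), ("example.org.", "DS")):
        try:
            answer = dns.resolver.resolve(name, rdtype)
        except dns.resolver.NoAnswer:
            print(name, rdtype, "(no record)")
            continue
        for record in answer:
            print(f"{name:14} {rdtype:7} {str(record)[:60]}...")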

It’s a good way to prevent DNS poisoning. With DNSSEC, reputable mail services (e.g. Google) will not be fooled by simple tricks into delivering mail to some man-in-the-middle server. Likewise, if the client’s DNS resolver validates DNSSEC, the client cannot be quietly redirected to another site.

However, few ISPs inside China do proper DNSSEC validation. One well-known DNS provider in China, 114DNS, is completely unaware of DNSSEC: if a record is signed with the wrong key, 114DNS does not care and simply returns the bogus result.
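If you want to check whether the resolver you use validates at all, one rough test (a sketch with dnspython; dnssec-failed.org is a deliberately mis-signed zone commonly used for this) is to ask for a domain whose signatures are known to be broken: a validating resolver should refuse with SERVFAIL, while a non-validating one happily hands back an address.

    import dns.exception
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["114.114.114.114"]   # the resolver under test
    try:
        resolver.resolve("dnssec-failed.org", "A")
        print("got an answer: this resolver does NOT validate DNSSEC")
    except (dns.resolver.NoNameservers, dns.exception.Timeout):
        print("query refused or failed: validation appears to be enabled")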

So I set up three DNS servers that do proper DNSSEC validation: one for my personal network (mail/VPCC/wiki/gitlab/backup/LDAP/WebDAV…), another for my personal VPN, and both of them use the third as a shared cache. Now the weakest spot is that DNS can still be poisoned before I bring the VPN up. However, since the VPN is secured with a separate set of RSA keys and I never browse anywhere without it, this should be fine.

With DNSSEC, I can now publish my keys via DNS. My GPG key can be fetched automatically if DNS key lookup is enabled. The weak point is that this lookup does not verify DNSSEC on the client side; it relies on the remote resolver. RFC 4035 seems to suggest that any client capable of validating DNSSEC should validate it itself, and GnuPG is a client that both could and should do so. Without that, anyone can simply tamper with the UDP packets between the resolver and the client and hand the client whatever key the attacker likes. A temporary workaround is to run a DNSSEC-validating resolver on localhost and query 127.0.0.1:53.
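Checking whether that local resolver really validated an answer boils down to looking at the AD flag. A minimal sketch with dnspython, assuming a validating resolver is already listening on 127.0.0.1 (the queried name and record type are just placeholders for wherever the key actually lives):

    import dns.flags
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = ["127.0.0.1"]        # the local validating resolver
    resolver.use_edns(0, dns.flags.DO, 4096)    # ask for DNSSEC records
    answer = resolver.resolve("example.org", "TXT")
    if answer.response.flags & dns.flags.AD:
        print("validated by the local resolver (AD flag set)")
    else:
        print("NOT validated -- do not trust keys fetched this way")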

Anyway, having it is better than having nothing. Still, if you want to send me encrypted email, see the about page on this blog and use the keys there, or make sure you are doing DNSSEC validation on localhost…

Perfection is death

Being perfect is good. But trying to reach perfection is a death sentence for anyone.

There is no perfection

In theory, everything has a ceiling, and you can reach perfection simply by spending enough. In that world, a project’s quality grows linearly with the time spent on it. In reality, it doesn’t. It’s just like speed: you can reach a certain speed easily by accelerating for a while, but beyond that, more time and energy spent accelerating buys you almost nothing. You can never reach c, even with a nearly infinite amount of time and a nearly infinite amount of energy. The same holds for any project: you can get to a certain quality level with a certain amount of time at the beginning, but no matter how long you keep at it, it is never perfect.

Let’s use a backup project as an example and walk through it in detail. First, define what a perfect backup project means:

  • No one can access the backup data except the owner
  • The owner will never lose any useful data, thanks to the backup

At first, it seems easily done. You write a script that diffs the data, splits it into small S3-sized objects, encrypts and signs each piece with GPG, then ships everything to Amazon Glacier. Just a few lines of script, easy.
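Roughly the kind of script I mean, as a sketch only: the diff-and-split part is omitted, the vault name and key ID below are made up, and it assumes boto3 plus a working gpg setup (this is not my actual script).

    import subprocess
    import boto3

    VAULT = "my-backup-vault"            # hypothetical Glacier vault name
    RECIPIENT = "backup@example.org"     # hypothetical GPG key id

    def encrypt_and_sign(path):
        # Encrypt to the backup key and sign with the default secret key.
        out = path + ".gpg"
        subprocess.run(
            ["gpg", "--batch", "--yes", "--encrypt", "--sign",
             "--recipient", RECIPIENT, "--output", out, path],
            check=True,
        )
        return out

    def upload(path):
        # botocore fills in the required tree-hash checksum automatically.
        glacier = boto3.client("glacier")
        with open(path, "rb") as f:
            return glacier.upload_archive(accountId="-", vaultName=VAULT, body=f)

    if __name__ == "__main__":
        archive = encrypt_and_sign("changed-files.tar")
        print("stored as", upload(archive)["archiveId"])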

But once you put it into your crontab, you find something is missing. It’s not a perfect backup scheme: data can still be lost if you accidentally delete it between backup runs. Intolerable! But you can still fix it. So you write a service, then go into your kernel source tree, open fs/open.c, patch the kernel, reboot, and find that not all calls are covered. So you change more sources, patch the kernel, reboot, and again, and again…

Now you think you have a perfect solution: every time you write a file, it is immediately transferred to Glacier; even before the file reaches the disk from the cache, it is already safe in the cloud. No way to lose data now.

But problems can always arise; it’s still a long way to perfection. What if Amazon goes bankrupt? Easy, back up to Aliyun as well. What if your backup GPG key is lost? Print the encrypted version and post it everywhere. What if the network is down? Write another service to act as a watchdog and beep loudly whenever a backup fails. Beeping is of course not perfection; you need two private network lines to Amazon and Aliyun just to get stable connectivity, so you buy AWS Direct Connect and some painful network setup for Aliyun. But even that can fail, so you build an automated program that calls Amazon and Aliyun to fix the private line whenever it finds the line is broken.

Yeah, now you have a perfect backup solution. Or do you?

It’s still far, far away from perfection.

What if RSA is not secure? You need a private asymmetric encryption scheme to make sure it’s safe (I use VXEnc~). What if an important idea is lost while you are typing in a tty? Patch the kernel again and add keystroke-stream backup. What if the kernel panics? Rewrite the kernel until it is perfect, so it never panics.

But it’s still far away from perfection.

You still need to write a git-like branching system to manage the backup-and-restore history, you need to store every object’s travel history, and you need to secure the network yet again, so you add several more providers. And you need a local offline copy, so you build a service that is just like Glacier. You need perfection, and Earth has some chance of nuclear war (0.7% in an average year, it is said). A 0.7% data-loss rate? Not tolerable! So you need to build the world’s biggest rocket launch site to send out backup copies in real time as you save a file. And it still needs much more work to stay secure in space.

 

You see, it can never be complete.

 

I spent about two hours finishing the first step, but much more time has been spent since then, and I still have not finished everything on the list. I believe much more could be done, just to satisfy the two simple requirements:

  • No one can access the backup data except the owner
  • The owner will never lose any useful data, thanks to the backup

I have developed a feeling that even if all human beings spent their entire lives just trying to finish such a simple backup task perfectly, they would fail. Even if every human generation, one after another, spent infinite time on this simple data-backup project, they would not achieve perfection.

There is no perfection.

 

There can always be perfection

Though in reality there is no perfection, you can always find a better way to do anything; there is always something you could do to make your project better. Because of the internet, you receive far more information than your ancestors did. They could live in a dreamland where they had done everything perfectly, even if they couldn’t be sure their house would survive the next storm; you can’t. You will always be receiving information about how to make something better, and that information tends to make you believe improvement is easy and simple. Your knowledge surpasses your ancestors’, your abilities let you do things that push your project toward perfection, and your brain refuses to believe anything is finished until it is perfect.

The smarter you are, the harder it is to lie to your own brain. If you are good enough, you may find that every single thing you have ever joined is marked as undone.

The modern lifestyle feeds this crisis. In the good old days, you knew when a piece of work was finished. When you made bottles for sale, you made bottles; even though they were imperfect, you would not spend time thinking about stealing them back from your customers to make them more perfect. Once a bottle left your hands, it was finished. No more headache.

But these days, you are a worker with multiple projects. You cannot finish a part of a project and mark it as done. Since you can always change that part, you will always be trying to make it perfect. As long as you have access to it, it is never marked as done.

As a human, you feel the Zeigarnik effect whenever something is left undone. When nothing is ever done, you go mad. Everyone feels that madness in modern society. People want to do things, but they can’t, because there are so many other things to do. They want to do A, but there are BCDEFGHIJ; they want to finish B, but there are ACDEFGHIJ, all shining more brightly in their brain than B because of the Zeigarnik effect. They decide to finish J first, but their brain keeps thinking of ABCDEFGHI. They decide to start a perfect timetable with a perfect J, and J will never be finished, because there is no perfection.

In the end, they finish nothing.

But still, ABCDEFGHIJ sits in their brain; they need to do it. So they browse the internet trying to find something for B, stumble on a good way to solve part of C, do that instead, and then remember B hasn’t even been started. Guiltily, they close the computer, look at the to-do list, pick H, try to knock it out in five minutes, and the phone rings.

Have you ever had the feeling, after an exhausting day, that you have done nothing at all?

Don’t you?

Henry Ford introduced the assembly line to save workers from low efficiency. Some textbooks say assembly lines improve efficiency by having everyone do a repeated task. That’s not the whole truth. Assembly lines improve efficiency by letting workers forget about the previous product and focus on the current one. An experienced master craftsman can build a car from raw metal if he wants, and even if he is more experienced in every detail than the assembly workers, he will never reach one fifth the efficiency of a man on the line. He can build a car in 10,000 hours with all the tools a worker has, but 1,000 workers can do the same thing in one hour.

It’s not because he lacks experience. Even if the assembly line is staffed entirely with brand-new workers, each of them will be far more efficient than the lone car master.

It’s because he can still touch his product even after a part of it is finished.

The only solution to this problem is a freeze-and-GTD lifestyle. Every single project should have a test that tells you whether it is finished. Once the test passes, even if your gut tells you the project is a mess, you should never touch it again. It’s finished. More than that, it’s frozen: for a preset period you shouldn’t do anything to improve it, even if you badly want to. Start a new project after that period if you still remember the old one. But never think about a project once it is finished; it will never be on your list again.

Heard this somewhere? Sounds familiar? Yes, it’s TDD. You write more production code per day with TDD (tests excluded) not because your time is magically doubled, but because your code can be anything, ANYTHING, as long as it passes the tests. Once some code passes its tests, you will not and should not keep reworking it. It’s a way to fight the Zeigarnik effect, just like the assembly line.

 

If you can always focus on the task at hand, you get a 5-10x performance boost, and the numbers bear it out. Assembly lines make workers focus, and a 10x gain is seen. Good TDD makes programmers focus, and for some programmers a 100x gain is seen. You can have this boost in your daily life too: just act as if you were on an assembly line, and you will be fine.

 

danger to HTTPS, doom to SPDY

Since the BREACH attack, it seems that there is no way to transport content securely in the HTTP world.

The BREACH attack is an HTTP-level version of CRIME; both recover encrypted content by analyzing the compression ratio of different media. It is well known that you can tell a picture apart from text just by its compression ratio, but before CRIME there was no easy way to tell exactly what the information was from the ratio alone. The leak, however, was always there. The words “faster” and “sunoru” have the same length, yet the (binary) entropy of “faster” is 2.58496 bits while that of “sunoru” is 2.25163 bits. So if you know the original length (6) of a word and can also observe its entropy, you can extract a surprising amount of information. Against a “perfect” compression algorithm, and with observation only, you would learn how many times each letter appears in a word, which is generally not that useful (but shouldn’t be public even so). Real-world compression algorithms, however, are NOT perfect, and real-world environments are NOT observe-only. You can send messages to the server to figure out which real-world compression method it uses, and with the multiple requests a CRIME attack makes, you can extract far more information from that simple ratio.
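For the curious, the entropy figures above are easy to reproduce; a few lines of Python computing the per-letter Shannon entropy of each word:

    from collections import Counter
    from math import log2

    def letter_entropy(word):
        # Shannon entropy (bits) of the letter-frequency distribution.
        counts = Counter(word)
        total = len(word)
        return -sum(c / total * log2(c / total) for c in counts.values())

    for word in ("faster", "sunoru"):
        print(word, round(letter_entropy(word), 5))
    # faster 2.58496
    # sunoru 2.25163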

For HTTPS, this is a danger for web pages that carry simple information. For example, some banks in China show how much money you have as a number rendered into a picture; when that picture is compressed, it is fairly easy to recover the real number it shows from the compression ratio alone. Using a precomputed table, you could crack millions of those “money pictures” per second on a MacBook Air. So if you find that your bank transports your balance as a picture, be aware that it may as well be a deliberate way to publish that information to the whole net.

For SPDY, however, your application may be cracked even without such a deliberate setup. SPDY’s speed relies on compressed headers, which include the URL, cookies, and authorization tokens. Since the client sends those headers with every request to the same site, you only need to XSS the client onto a static page (e.g. a 404 page~) and you can recover all the information in the headers without any painful struggle. And once you have the headers, you have the URLs (so the complete browsing history is exposed), the cookies and authorization tokens (so the person’s login session), and through those the content of the pages. In effect, it is as if the user were visiting the page over HTTP without the S.
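To make the length oracle concrete, here is a tiny self-contained sketch with zlib (a made-up cookie value, no real server or SPDY stack involved): when attacker-controlled text that matches the secret is compressed together with it, the output comes out measurably shorter.

    import zlib

    SECRET_HEADER = "cookie: sessionid=7f3a9c2d41e8b0aa"   # hypothetical secret

    def observed_length(injected):
        # The attacker controls part of the plaintext (e.g. the request path)
        # that is compressed together with the secret header, and can only
        # observe the length of the compressed result on the wire.
        plaintext = injected + "\r\n" + SECRET_HEADER
        return len(zlib.compress(plaintext.encode(), 9))

    right = observed_length("GET /?q=sessionid=7f3a9c2d41e8b0aa HTTP/1.1")
    wrong = observed_length("GET /?q=sessionid=x1y2z3w4v5u6t7s8 HTTP/1.1")
    print(right, wrong)   # the correct guess compresses to fewer bytes
    # A real CRIME attack refines this into byte-by-byte recovery by extending
    # the guessed prefix one character at a time and keeping the shortest result.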

Not only HTTPS and SPDY are affected; Tor, which uses gzip as its compression algorithm, is also affected, though it may be harder to crack since Tor reuses TCP tunnels… SSH with compression enabled can also be attacked this way, but it takes some skill and luck to do the gzip guessing, since you cannot easily make the user resend things.

In conclusion, SPDY is effectively clear text to a careful attacker, and HTTPS is not as secure as it used to be…

The good news is that the TLS working group has finally recognized the danger of compression and decided not to support it any more in TLS 1.3 draft-02. Did I say good news? It doesn’t seem like a pleasant one for those with limited network resources…