"It's just that if a task can be automated it should be" - This is the attitude which really scares the bejesus out of me. Just because something can be done, really doesn't mean it should be done. I can burn my house down, but it wouldn't be a good idea.
When creating a new technology it is no longer enough to do it just because you can; people also need to consider whether it's a good idea (and, if there are reasons it's not, whether there is any mitigation against them).
Your first example is not valid. A TV remote automates a dull, repetitive task and does so in the moment; it is a tool not dissimilar from a hammer. A bot which makes decisions on your behalf, rather than simply carrying out your direct will, is a totally different matter. I am also disappointed by your use of your grandfather to mount an implicit ad hominem attack, painting me as a relic of a bygone era with no valid argument to make today.
Your second example is more interesting, and indeed less offensive to my sensibilities than I had imagined, but it doesn't include the use of AI as mentioned in the article. The introduction of AI, I imagine, means something along the lines of noticing that a person acts like they prefer email and then using that information without the person ever specifically requesting to only receive email. I find this concerning mainly because it takes a one-dimensional view of reality, traps people within that bubble, and fails to account for the inevitable exceptions. For example, maybe Bob prefers people to email him because he tends to get more detail in an email than in an SMS - your system ruins this. Or maybe what he really likes is to receive a response in the same format he sent - so one day he sends an SMS, but your system has already put him in the email pigeonhole and he gets an email response. I could go on for a long time, but I'll stop there.
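Actually, to make that last pigeonhole concrete, here is a minimal sketch (hypothetical names and logic in Python, not from any real system) of the difference between locking a contact into an inferred preference and simply respecting the channel they actually used:

    # Hypothetical sketch of a naive "preference bot" that pigeonholes contacts.
    # All names and behaviour here are illustrative assumptions.

    inferred_preferences = {"bob": "email"}  # "learned" from past behaviour

    def naive_reply_channel(sender: str, incoming_channel: str) -> str:
        # The pigeonhole: the inferred preference always wins, so Bob's
        # SMS gets an email reply he never asked for.
        return inferred_preferences.get(sender, incoming_channel)

    def better_reply_channel(sender: str, incoming_channel: str) -> str:
        # Respect the channel the person actually used; fall back to the
        # inferred preference only when there is no signal to go on.
        return incoming_channel or inferred_preferences.get(sender, "email")

    assert naive_reply_channel("bob", "sms") == "email"   # the failure I describe
    assert better_reply_channel("bob", "sms") == "sms"    # the exception handled

The point being that the exception is trivial to handle when the system defers to what the person is doing right now, and impossible to handle once it has quantified him into a single preference.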