Coding with ChatGPT 3: Project failure (arguably not my fault)

David Durant
6 min read · Sep 11, 2024


So, this was a frustrating one. Sure, it really is my fault, even though I’d like to pretend it’s not. It’s another idea that’s been sitting in my list of potential coding projects for the better part of a decade. The premise is very simple. Google Maps offers a “location sharing” feature, which enables you to see folks who have actively chosen to share their real-time location with you on their web-based and phone-app mapping platforms. I remember when the feature was first released back in 2017…

Me: “Annie, you know I like playing with new tech. Google’s just released a new thing that lets us see where each other is in real time on Google Maps. Can you please switch it on so I can track you?”
Annie: “I dunno, feels kinda creepy. Oh, go on then.”
3–4 weeks pass…
Me: “Well, that proves that works well, let’s switch it off.”
Annie: “But I like knowing where you’ve wandered off to…”

I jest. It’s a very useful feature that we use all the time. Moving on!

At the same time, for many years, I have had a Google Home smart-speaker in my living room which, to be honest, isn’t used for much apart from controlling the lighting. For a long time, I’ve just wanted to be able to say, “Hey Google. Where’s Annie?” and have it just tell me where she is [Annie editing: Somehow, that really is creepy!]. Google has all the tech in place to do this really easily. My friend who works at Google tells me that post-probation software developers have access to all the company’s APIs, so tying something like this together and scaling it would be relatively trivial. It feels like something they’ve actively chosen not to do for some reason.

So, with a couple of successful ChatGPT-based coding projects under my belt, I thought I’d give this one a crack. I started out by leaping directly into doing the coding — we’ll see how that was a fundamental failure of thinking in a bit.

My first stop was the documentation for the Google Maps API. The immediately obvious thing is that there’s no API for location sharing. Google obviously has this data and, to use the API in your application, you quite rightly have to jump through a lot of hoops to prove who you are, so there shouldn’t be any data protection issues — but there’s no way to access the location-sharing data. I wonder if it’s a server-load issue — they don’t want lots of apps hitting an endpoint every few seconds to track where people are. If so, that could easily be fixed with rate limiting, so I suspect that isn’t it. It’s probably a data-sharing issue that I haven’t thought deeply enough about to figure out.

Anyway, Google’s own API was a bust, so what could I do next? Well, I know that when I pull up Google Maps on my PC, I can see an icon on the screen that shows where Annie is. So that data is being sent to my computer, and surely I could get ChatGPT to write a script to pull that information out of the HTML that Google sends for my web browser to display. Ha ha. No.

Turns out there are a lot of issues with this. The primary, and most obvious, one being that Google really doesn’t want you to do this. While I was examining the project with ChatGPT, it kept expressing its concern, again quite rightly, that what I was asking it to do might contravene Google’s Terms of Service (though it carried on doing it anyway).

I spent several days trying different methods to do this. It shouldn’t have been a surprise that Google Maps is a very complex website. It uses JavaScript to generate a lot of the URLs the main page then uses to fetch extra data. I tried a couple of different embedded JavaScript engines called from my Python code and, when that didn’t work, I disappeared down a deep rabbit hole trying to use the automated-testing platform Selenium to pretend to be me, first with Google Chrome and then with Firefox, to access the data.
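
For flavour, the skeleton of that last approach looks something like the sketch below (assuming Selenium 4 and Firefox are installed; this is illustrative, not my actual code). It only gets as far as opening Maps in an automated browser session; digging the shared-location marker out of the generated page is exactly the part Google works hard to make impractical.

```python
# Minimal Selenium sketch: open Google Maps in an automated Firefox session.
# Selenium 4's built-in manager fetches geckodriver automatically.
from selenium import webdriver

driver = webdriver.Firefox()
try:
    driver.get("https://www.google.com/maps")
    print(driver.title)  # just proves the page loaded
    # From here you'd have to reverse-engineer the page's generated URLs and
    # DOM to find the shared-location marker -- which is where this died.
finally:
    driver.quit()
```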

I expressed my frustration to my friend at Google, who patiently explained that they have whole teams there specifically to stop people from doing the kind of thing I was trying to do. Multiple teams.

Somewhere around day four, there was an epiphany. Out of nowhere, ChatGPT suddenly said something like, “If you’re trying to get location-related data, instead of using Google, why not try an alternative product like OwnTracks?” I’d never heard of it, but it turns out that OwnTracks is an excellent free product that, to quote their website, “allows you to keep track of your own location. You can build your private location diary or share it with your family and friends.” I installed it on my phone, configured it to point to the webhook I’d already set up and, within a few minutes, it was recording my location in my local database.
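
For anyone curious, the webhook side of this can be tiny. Here’s a rough sketch of the shape of it, assuming OwnTracks is running in HTTP mode and posting its standard JSON location payload; the Flask app, route name and SQLite table are my own illustration, not the code from this project.

```python
# Sketch of a webhook that stores OwnTracks HTTP-mode location posts in SQLite.
# Field names (_type, lat, lon, tst) follow the OwnTracks JSON location payload.
import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)
DB = "locations.db"

def init_db():
    with sqlite3.connect(DB) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS locations (ts INTEGER, lat REAL, lon REAL)")

@app.route("/owntracks", methods=["POST"])
def owntracks():
    payload = request.get_json(force=True)
    if payload.get("_type") == "location":
        with sqlite3.connect(DB) as conn:
            conn.execute(
                "INSERT INTO locations (ts, lat, lon) VALUES (?, ?, ?)",
                (payload.get("tst"), payload.get("lat"), payload.get("lon")),
            )
    # OwnTracks expects a JSON array back (it can carry return messages).
    return jsonify([])

if __name__ == "__main__":
    init_db()
    app.run(host="0.0.0.0", port=8000)
```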

For a more useful application, I asked ChatGPT to also store the nearest point of interest to that location, which was quickly and easily retrieved via the free OpenCage API.
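
That lookup is a single reverse-geocoding call. Something along these lines (with a placeholder API key, and not the project’s exact code) is roughly all it takes; OpenCage turns a lat/lon pair into the nearest formatted address or named place.

```python
# Rough sketch: reverse-geocode a lat/lon pair with the OpenCage API.
import requests

def nearest_place(lat: float, lon: float, api_key: str) -> str:
    resp = requests.get(
        "https://api.opencagedata.com/geocode/v1/json",
        params={"q": f"{lat},{lon}", "key": api_key, "no_annotations": 1},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    return results[0]["formatted"] if results else "unknown location"

# Placeholder key; coordinates are e.g. a point on London's South Bank.
print(nearest_place(51.5033, -0.1196, "YOUR_OPENCAGE_KEY"))
```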

I was almost there. Now all I had to do was set up my Google Home so that, when I asked for my location, it called another webhook I’d already put in place and just read out the text it returned.

And that’s when it all came crashing to a halt.

You see, if I’d been sensible, I would have gone through all the parts of the process and made sure they were achievable before I even started. In terms of using code to interact with my Google Home, I very vaguely remembered doing some half-hearted experiments many years ago and just assumed that that part of the whole thing would be trivial.

Not so.

I started out trying to use If This Then That (IFTTT) because I was fairly sure I’d done that before — plus it had documented examples of doing exactly what I wanted to do. Turns out those examples are sadly out of date. As of August 2022, you can’t do that via IFTTT any more.

Okay, I thought, I’m sure you must still be able to do that. I’ll plough through the hellish maze of Google documentation around their Smart Home devices.

Eventually, I gave up and just posted a message on their forum. After the better part of a week and no response, I tagged a Google employee who I saw had posted there previously and, after a couple of days, they responded.

Turns out the answer is… you can’t. Google has sunsetted something called Conversational Actions, which means there’s simply no way for anyone outside Google itself to write anything that can be triggered by voice commands to Google Assistant / Home / Nest. There are various other ways to integrate with Smart Home (via routines, phone apps, etc.), but via voice from a Google Home, nada.

Turns out that this is just another addition to the Google Graveyard, and a really foolish one in my opinion. Google is betting big on its AI, Gemini (which still feels distinctly inferior to ChatGPT in the limited time I’ve experimented with it). At the moment, Google Home uses a completely different system to answer user queries, control local devices and do other tasks, but bringing the two together sooner rather than later seems an obvious move. Putting developers off by removing the ability to create new voice-driven apps, and breaking a lot of existing ones, seems very short-sighted when you’re probably going to want a lot of enthusiastic supporters in the development world for your shiny new thing in the near future.

Ah well.

So that’s the end of that one. You could argue that I could have saved the better part of a week by not assuming that the Google side of the process would “just work”, or you could blame Google for seemingly pointlessly shutting down something that was priming people to develop apps for Gemini.

Or you could do both.

Anyway, plenty more tech projects to try out with ChatGPT. On to the next one!

