We just launched a fun little tool called Phantom Analyzer: a 100% serverless scanner that checks websites for hidden tracking pixels.

I want to talk about how we built it 👇

The idea came about early this year after @mijustin suggested something around badges/certification, which developed into “what if we could scan websites for Google Analytics?!”. But we left the idea sitting in Basecamp for months.
Fast forward to Halloween: we were thinking about fun things we could do to entertain people. We discussed “Phantom Analytics” as a play on people getting confused by our product name, kicked around various ideas, and then landed on the URL analyzer.
Once we’d finalized the spec, Paul got to work on our Halloween-themed design and coded up the HTML/CSS for it all. I then took it and put it into a Laravel application. Nice and easy.
Right off the bat, I knew the base stack I'd be using:

> Laravel Vapor
> ChipperCI for deployment
> SQS for queues
> DynamoDB for the database

We went with DynamoDB as we don’t want to worry about our database scaling!
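(For context, pointing Laravel at DynamoDB is mostly a cache-driver swap. Here's a minimal sketch based on Laravel's stock config/cache.php; on Vapor, the table and credentials are provisioned for you, so treat the env names and defaults as assumptions.)

```php
// config/cache.php (excerpt). On Vapor the DynamoDB table and credentials
// are wired up automatically; these env names/defaults are illustrative.
'stores' => [
    'dynamodb' => [
        'driver' => 'dynamodb',
        'key' => env('AWS_ACCESS_KEY_ID'),
        'secret' => env('AWS_SECRET_ACCESS_KEY'),
        'region' => env('AWS_DEFAULT_REGION', 'us-east-1'),
        'table' => env('DYNAMODB_CACHE_TABLE', 'cache'),
    ],
],
```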
So with our infrastructure known, we had a few challenges left to solve:

> How will we scan websites for tracking pixels?
> How will we utilize the queue and check the job is done?
> How will we validate the URL?
For scanning websites, the first thing I did was write a complex, multi-level scanner that fetched pages with Guzzle and ran regex matching over the HTML. The problem was that it didn't execute the JavaScript, which often triggers additional requests, so the results weren't accurate.
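For illustration, that first approach boiled down to something like this (a hedged sketch, not our exact code):

```php
use GuzzleHttp\Client;

// First attempt, roughly: fetch the raw HTML and regex-match request URLs.
// Anything injected later by JavaScript never shows up here, which is
// exactly why this approach produced inaccurate results.
$html = (new Client())->get($url)->getBody()->getContents();

preg_match_all('/https?:\/\/[^"\'\s>]+/i', $html, $matches);
$candidateUrls = array_unique($matches[0]);
```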
After spending a long time on that, I searched for website crawlers and came across Puppeteer, a headless Chrome Node.js API. I then searched for how to get it running on Laravel Vapor and found that someone had already solved that challenge!
I then spent 8 hours trying to start from scratch with Puppeteer, copying from @spatie_be's Browsershot code, but I just couldn't get it working. So I went to bed, deciding that in the morning I'd start with Browsershot itself and simply modify it to what I needed.
The next morning, within 15 minutes, I had a screenshot generating on Laravel Vapor. Hooray! I then started to modify Browsershot…

Wait a minute...

Yes, out of the box, Browsershot already had what I needed. Are you kidding me?
So I modified my job and had it all working within minutes. Browsershot can return the list of network requests triggered while loading a web page. Bloody perfect. I then simply compared those requests against our list of around 10,000 known third-party trackers.
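The core of the job looks something like this sketch (triggeredRequests() is Browsershot's documented API; the tracker list source and matching logic here are assumptions):

```php
use Illuminate\Support\Str;
use Spatie\Browsershot\Browsershot;

// Load the page in headless Chrome and collect every network request it triggered.
$requests = Browsershot::url($url)->triggeredRequests();

// Hypothetical source for the ~10,000 known third-party trackers.
$trackers = collect(config('trackers.domains'));

// Flag any request whose URL matches a known tracker.
$found = collect($requests)
    ->pluck('url')
    ->filter(function ($requestUrl) use ($trackers) {
        return $trackers->contains(function ($tracker) use ($requestUrl) {
            return Str::contains($requestUrl, $tracker);
        });
    })
    ->values();
```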
The next step was working out how we'd get permanent storage in DynamoDB without bringing in anything extra. I wanted to keep it simple. So with DynamoDB as the cache driver, and “resources” as the storage root for the tracker list, I wrote a command that caches the tracking pixels indefinitely.
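A hedged sketch of that command (the file name, cache key, and command signature are all assumptions):

```php
use Illuminate\Console\Command;
use Illuminate\Support\Facades\Cache;

class CacheTrackingPixels extends Command
{
    protected $signature = 'pixels:cache';

    protected $description = 'Cache the known tracking pixel list indefinitely';

    public function handle()
    {
        // Read the tracker list shipped under resources/ (hypothetical file name)
        // and store it permanently in the DynamoDB-backed cache.
        $pixels = file(resource_path('trackers.txt'), FILE_IGNORE_NEW_LINES);

        Cache::store('dynamodb')->forever('tracking-pixels', $pixels);

        $this->info('Cached ' . count($pixels) . ' tracking pixels.');
    }
}
```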
One of my initial concerns was security: we would be passing user input to the command line, and that wouldn't be safe. I spoke with @marcelpociot, he gave me some great advice, and I added in some validation. The active_url rule is fantastic.
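In Laravel that's a one-liner (the field name is assumed; active_url rejects hosts without valid DNS records):

```php
// Reject anything that isn't a well-formed URL with a resolvable host
// before the input goes anywhere near headless Chrome.
$request->validate([
    'url' => ['required', 'url', 'active_url'],
]);
```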
I also wanted to make sure that if a user entered a full URL (e.g. https://t.co/66d4eLDaOu) and not just "https://t.co/GA31muKcta", they'd still be redirected to the correct results page, especially since our "tidying up" was opinionated. So we ran code along the lines of the sketch below.
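A sketch of the idea (not our exact code; the route name is an assumption): reduce whatever was submitted to a bare host, then redirect to its canonical results page.

```php
// Opinionated tidy-up: whatever form the URL arrives in, reduce it to a
// bare hostname, then redirect to that host's results page.
$input = $request->input('url');

$host = parse_url($input, PHP_URL_HOST) ?: $input;
$host = preg_replace('/^www\./i', '', strtolower($host));

return redirect()->route('results', ['url' => $host]);
```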
We then had to think about how to configure our Vapor app, and I went with the following settings:

> 1024MB of RAM
> 2048MB of RAM for the queue (could likely be reduced!)
> A warm setting of 500
> A CLI timeout of 180 seconds

Those settings all worked nicely.
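In vapor.yml terms, that looks roughly like this (project id/name and environment layout are placeholder assumptions):

```yaml
# vapor.yml (sketch; id and name are placeholders)
id: 12345
name: phantom-analyzer
environments:
  production:
    memory: 1024
    queue-memory: 2048
    warm: 500
    cli-timeout: 180
```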
For our Vapor layers, we ran it like this. Very cool. My first time using Layers. Incredible work by the Vapor team (@themsaid @taylorotwell @enunomaduro).
For the "is it ready?" check, I debated using a UUID but I decided that we might have multiple users trying a website at the same time, and they should benefit from the same cache entry (we cache results for 5 minutes).
So for the ping, we went super old school: poll on an interval, redirect when done. Very effective. When the page reloaded, it would hit the cache, see the entry, and display it.
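The server side of that can be as small as this sketch (route, cache key, and view names are assumptions):

```php
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Route;

// If the scan isn't cached yet, render a "scanning…" page that re-checks on
// an interval; once the cache entry appears, show the results.
Route::get('/results/{host}', function ($host) {
    $results = Cache::get('scan:' . $host);

    if ($results === null) {
        return view('scanning', ['host' => $host]);
    }

    return view('results', ['results' => $results]);
});
```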
All in all, this was a fun project to build. I love working with Paul. The only design addition I made was the bats & fade; Paul did everything else. Very grateful for that 😂
I'm still in awe of how quickly we deployed this with Vapor. I'm not kidding: once it was all coded up, we just created the project in the UI, deployed it, and we were done. A remarkable experience. Infinite scale without any server work 😎

Hope you all enjoy Phantom Analyzer!
