[Automate advanced class]

This is one of the best things you can do with Automate: getting information from a website without bothering to open a browser, just a simple notification popping up saying "here is something you're interested in".

Now, you could achieve this by requesting an RSS feed, using JSON or XML decode, and reading through the returned dictionary value (examples can be found on the community). However, not all websites provide an RSS feed, so we'll have to dig through their HTML... this is going to be a long post, so bear with me.
Restrictions and limitations are at the bottom.

STEP 1: getting the source code
This is the easy step: just call an HTTP request block, enter the target URL, save the response to a variable as text, and you're done! But it's important to know what we're getting, so how do we read that HTML code? Obviously you can choose Save to file and open the saved file with a text editor, but if it's a large file, your Android device is going to have a hard time trying to edit it.

An easier method is Chrome: just download it from the Play Store and open the website you need, then add "view-source:" to the very beginning of the URL (so it looks like view-source:https://example.com, with example.com standing in for your target site), and Chrome will show you the source code received.

You can try this feature on desktop too, but note that websites may serve different HTML layouts to desktop and mobile!

STEP 2: Slice up the data
Now the HTML code is saved as a single, long string, so we need to cut that string into an array of the desired information. If you don't need a list of items and only need one bit of text, just skip to step 3.
For this we'll use the split() function. With split we can cut a string into an array, separated by a given substring. For example: split("example","a") will return ["ex","mple"].
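If you want to experiment with this behavior outside Automate, Python's str.split works the same way (a Python analogue of what's described above, not Automate's own code):

```python
# Splitting a string by a substring, mirroring Automate's split("example","a")
parts = "example".split("a")
print(parts)  # ['ex', 'mple']
```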
This is the tricky part: finding a perfect substring that can be used to split the HTML page into an array where each part contains the data entry we need.
Example web format (not representative of an actual site; normally they're much messier):
css stuff
<img-url: "muchsexy.png", title:"ex">
<img-url: "wowsexy.png", title:"am">
<img-url: "verysexy.png", title:"ple">
goal: get sexy.png files

Now you must read through the HTML source code, find where your desired information lies, then find the snippet of text that separates those pieces of information. In the example above we can see "url" might do the trick, but there's a "url" at the top that doesn't contain what we need, so further reading suggests that "img-url" is more suitable.

You can use the Find in page feature in Google Chrome to make sure your substring doesn't split an unwanted part.

And now we just use a Set variable block with split(html,"img-url") to get that array.
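As a sanity check, here's the same split done in Python on a made-up page in the example format above (the filenames and markup are illustrative, not a real site):

```python
# A hypothetical page modeled on the example web format above
html = ('<html> site-url: css stuff <body>'
        '<img-url: "muchsexy.png", title:"ex">'
        '<img-url: "wowsexy.png", title:"am">'
        '<img-url: "verysexy.png", title:"ple"> </body> </html>')

# Python analogue of Automate's split(html, "img-url")
array = html.split("img-url")
print(len(array))  # 4 parts: one header chunk we don't need, plus three entries
```

Note that "site-url" at the top does not match "img-url", which is exactly why the longer substring was the better choice.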

FINAL STEP: Dig out the gold

After splitting the HTML, we get an array like this:
[<html> site-url: css stuff <body><,: "muchsexy.png", title:"ex"><,: "wowsexy.png", title:"am"><,: "verysexy.png", title:"ple" </body> </html>]
(each array value is separated by a comma ,)

There's only a bit of unwanted text keeping us from the final data, so we'll filter through each array value to grab that final piece.
For this we can use a For each block on the array... But wait! Check again: the first index of the array is still something we don't need, so we'll use slice(array,1) to cut out that first index, without having to call another array block.

For each entry value in slice(array,1), we will again, expertly, split out the unwanted information, in this case with split(entry_value,"\"")[1] to cut at the double quotation mark (") and grab the 2nd part, which is the sexy.png filename, our goal. Now you can put this in an Array add block to create a list of the information you just dug up, or a Notification show block, or a Set variable block... go wild!

If you also need another bit of data in that array, like the title in the example above, you could use another split(value,"\"")[3]. But sometimes the data doesn't lie conveniently at the same location, in which case you can improvise and get a bit creative, like split(split(value,"title:")[1],"\"")[1].
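Putting all the steps together, here's the whole tutorial sketched in Python against the same made-up example page (an illustration of the technique, not the actual Automate flow):

```python
# Hypothetical page in the example web format from step 2
html = ('<html> site-url: css stuff <body>'
        '<img-url: "muchsexy.png", title:"ex">'
        '<img-url: "wowsexy.png", title:"am">'
        '<img-url: "verysexy.png", title:"ple"> </body> </html>')

results = []
for entry in html.split("img-url")[1:]:             # split, then drop the junk header
    filename = entry.split('"')[1]                  # split(entry_value,"\"")[1]
    title = entry.split("title:")[1].split('"')[1]  # the nested-split trick
    results.append((filename, title))

print(results)
# [('muchsexy.png', 'ex'), ('wowsexy.png', 'am'), ('verysexy.png', 'ple')]
```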

Because not all websites are the same, you'll have to try different methods and improvise from time to time for that little bit of string.


This flow is also an example that digs in and returns a list of movies, with seeds and size, and opens a URL when you select one; you can try reverse-engineering it for better understanding.

❕ Some websites don't allow automation and require a Captcha to enter (to protect against DDoS attacks and such), in which case Automate will fail at step 1.
❕ This doesn't work on websites that need you to log in, like Facebook, Twitter,... in such cases, please use the API provided by that website; an RSS generator may be available.
❕ Some sites like YouTube and Google are much more complex than others, as they're mixed with JavaScript and variables and are harder to crack. (I once created a "Google search result URL" flow, but it was lost while switching to a new device; I'm still crying inside.)
❕ Sometimes a website might change its HTML source code, causing your flow to fail and become obsolete, and you'll have to read and dig again. Although most sites only do this every few years, or not at all.