Manga - FanFox (MangaFox) crawler v1.3.1.4 Nulled

My website's crawler progress is empty. Even though I press Start it stays empty, and then it just says the crawler stopped successfully. Auto update doesn't work either.
[screenshot attached: 1608207259830.png]
Try setting storage to Local and doing 2-3 single crawls first, then use a private proxy to start the auto update.
 
Not working. By the way, does manga auto-update work if you use single-manga crawl?
One reason your auto crawler may not work is that your IP got banned by FanFox, so it's better to sign up with a good private proxy, or try the free trial on ScraperAPI.
The auto update works on my site, and I use BunnyCDN as storage. For single crawl I use Local storage to get some chapters, then wait for them to show in the queue list on the Crawler Progress tab. When you grab a single chapter, you have to stop the auto crawler first. These are the steps I use:

1. Use single crawl to get the 70 manga I want, crawling at least 10 chapters per manga (private proxy used; I use a Webshare proxy).
2. Disable and re-enable the plugin (my auto crawler started working after doing this).
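For anyone unsure what using a private proxy looks like in practice, here is a minimal PHP cURL sketch; the host, port, and credentials below are placeholders, not real values, and the plugin's own proxy setting does the equivalent of this internally.

<?php
// Minimal sketch: fetch a FanFox page through an authenticated HTTP proxy.
// proxy.example.com, 8080, and user:pass are placeholders for your proxy details.
$ch = curl_init('https://fanfox.net/manga/king_of_hell/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_PROXY, 'proxy.example.com:8080');
curl_setopt($ch, CURLOPT_PROXYUSERPWD, 'user:pass');
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0'); // some sites block empty user agents
$html = curl_exec($ch);
if ($html === false) {
    echo 'Request failed: ' . curl_error($ch) . PHP_EOL;
}
curl_close($ch);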
 
Thanks for sharing your steps. I wish I could have followed them from the beginning.
I cannot delete the queue list. I tried reinstalling the plugin, but it still exists, and I don't know where the data is stored in the database.
So the crawler doesn't work for me, and the debug log shows: "Extract folder does not exist."
I don't know how to fix this. 😭
 
The important thing is that there are lots of manga titles in the queue list; that means the auto crawl is working, even if I can't delete them.
My trick is to use manual crawl ("single crawl manga") to build the list of manga I want auto crawl to grab. Even now, of the 80 titles I listed via single crawl, only 35 manga appear in auto crawl. I use auto crawl when there are a lot of updates available in the queue list. Usually 1-3 manga are added to the queue list per day from the 80 manga I manually crawled, so it started with 16 manga in auto update and reached 35 within a couple of days. I don't know about the licensed version, since nulled versions usually aren't 100% functional.
 
Glad to know the details; that clears up a lot of my confusion.
The good news is that I found the list. At first I thought it was stored in the database, but after reading the crawler code I realized it's kept in a JSON file at wp-content/uploads/wp-crawler-cronjob/fanfox-manga-crawler/queue_0.json
Hope this helps you.
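Since the queue lives in that JSON file and not in the database, a stuck entry can be removed by editing the file directly. A rough sketch, assuming the slug-keyed entry structure shown later in this thread; this is not an official plugin feature, so back the file up first.

<?php
// Sketch: drop one stuck manga from the crawler queue file. Back it up first.
$file = 'wp-content/uploads/wp-crawler-cronjob/fanfox-manga-crawler/queue_0.json';
$queue = json_decode(file_get_contents($file), true);
unset($queue['king_of_hell']);   // key = the slug of the entry you want removed
file_put_contents($file, json_encode($queue, JSON_PRETTY_PRINT));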
 
Yes, it's important to crawl at a reasonable rate. Don't try to crawl 80 chapters a minute; you'll only time out your hosting or get an IP ban.
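To put a number on "reasonable rate", here is a hedged sketch of the kind of throttling to aim for between chapter requests; fetch_chapter() is a hypothetical placeholder for your real download call, and the delay values are guesses, not plugin settings.

<?php
// Sketch: pace chapter downloads instead of hammering the source site.
// fetch_chapter() is a hypothetical placeholder, not a plugin function.
$chapter_urls = array(/* chapter URLs to download */);
foreach ($chapter_urls as $url) {
    fetch_chapter($url);
    sleep(rand(5, 10));   // 5-10 s between requests; the jitter looks less bot-like
}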
 
Hi, can anyone help me with this issue? I'm trying to crawl a single manga from FanFox, similar to the screenshot below.
[screenshot attached: 1608559799359.png]

But when I check what the chapter looks like, it still shows sample images.
[screenshot attached: 1608559854079.png]

Please help me. Thanks!
 
What version of the FanFox crawler are you using? I got that result when I used an old version.
 
One more question: what if we just want to crawl manga from the daily ranking, for example http://fanfox.net/ranking/, and don't want to crawl all the data? Is that impossible?
I don't know the answer yet; maybe it's possible. I use manual crawl to get the manga list I want, then manually add it to the queue list. You can check @shemmy's post in this thread, but I add my list to the upload.json and put in code like this:
{
  "king_of_hell": {
    "slug": "king_of_hell",
    "name": "King Of Hell",
    "url": "https:\/\/fanfox.net\/manga\/king_of_hell\/",
    "is_update": true,
    "post_id": 756
  },
  "mangathe_ghostly_doctor": {
    "slug": "mangathe_ghostly_doctor",
    "name": "The Ghostly Doctor",
    "url": "https:\/\/fanfox.net\/manga\/the_ghostly_doctor\/",
    "is_update": true,
    "post_id": 576
  }
}

Put a "," after each entry's closing brace when you add another manga, but don't put one after the last entry; JSON does not allow trailing commas.
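If hand-editing the file keeps tripping on commas, here is a small sketch that appends an entry with PHP instead, so json_encode handles the syntax for you; the slug, name, and post_id below are invented examples, not values from this thread.

<?php
// Sketch: append one manga entry via PHP so the encoder guarantees valid JSON.
// 'example_manga' and post_id 123 are made-up placeholders.
$file = 'wp-content/uploads/wp-crawler-cronjob/fanfox-manga-crawler/queue_0.json';
$queue = json_decode(file_get_contents($file), true);
$queue['example_manga'] = array(
    'slug'      => 'example_manga',
    'name'      => 'Example Manga',
    'url'       => 'https://fanfox.net/manga/example_manga/',
    'is_update' => true,
    'post_id'   => 123,   // the WordPress post ID of that manga
);
// json_encode escapes the slashes the same way the plugin's file does.
file_put_contents($file, json_encode($queue, JSON_PRETTY_PRINT));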
 
Can we change local storage to something else, like Google Drive or Google Photos? I tried crawling 1000 manga and they take up 200 GB of space.
 
I've already hit my hosting's 250,000 inode limit, so I'm contemplating moving to BunnyCDN or Amazon S3. Will those still increase my inode usage or not?
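On the inode question: offloading to BunnyCDN or S3 should stop local inode growth, since each image becomes a remote object rather than a file on your hosting. A sketch of a raw upload, assuming BunnyCDN's Edge Storage API (a PUT to storage.bunnycdn.com with an AccessKey header); the zone name, path, and key below are placeholders.

<?php
// Sketch: PUT one image straight to BunnyCDN Edge Storage instead of local disk.
// 'my-zone' and the AccessKey value are placeholders for your storage zone details.
$body = file_get_contents('page_001.jpg');
$ch = curl_init('https://storage.bunnycdn.com/my-zone/manga/example_manga/1/page_001.jpg');
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PUT');
curl_setopt($ch, CURLOPT_POSTFIELDS, $body);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('AccessKey: YOUR-STORAGE-ZONE-PASSWORD'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
curl_close($ch);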
 