Is it possible to change videocache.py to minimize the amount of traffic requested? Right now it uses all the available bandwidth, but I only want it to use half for downloading objects to the cache.
I saw that it's possible to throttle urlgrabber to the speed I want, but I don't know for sure how to do it.
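What I had in mind is something along these lines (untested sketch; the URL and rate are just placeholders):
from urlgrabber.grabber import urlgrab
# an integer throttle is a bytes/second limit, so this caps the download at ~64 KB/s
urlgrab('http://example.com/video.flv', 'video.flv', throttle=65536)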
Hello,
First of all, let me tell you that you are the 50th user on our forum :D
You can control the amount of bandwidth used by videocache with the max_parallel_downloads option in /etc/videocache.conf. We have not implemented throttling via urlgrabber yet; we may consider it as a feature in the future.
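For example, in /etc/videocache.conf (the value is just illustrative; pick whatever suits your link):
max_parallel_downloads = 2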
Thank you for considering videocache :)
Hi!
If you really want fine-grained control over the bandwidth videocache uses for downloading, you can look into delay pools for Squid. First, explore delay pools.
Once you know delay pools well, create a delay pool for your proxy server itself (because videocache uses your proxy server to download videos) and limit the bandwidth there. With delay pools, you can limit bandwidth using every ACL Squid offers :D
So, instead of messing with the Python code, invest your time in delay pools.
But if you still feel like scratching around in the videocache code yourself, here is the code repository :D
Thank you for using videocache :D
Hi,
After spending a couple of hours going through all the code, and thinking about how little I knew about Python :), I found out how to control it. I will post the how-to here; if you wish, you could add it to the installation/configuration manual.
First, find the path of urlgrabber on your system.
For me (Debian Etch x86_64) it is:
/var/lib/python-support/python2.4/urlgrabber
On other systems, find the directory by typing:
find / -name urlgrabber
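Alternatively, you can ask Python itself where the module lives (assuming urlgrabber is importable; Python 2 syntax, which matches that era):
python -c "import urlgrabber; print urlgrabber.__file__"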
Edit the file grabber.py and, on the line that reads "throttle = 0", set the speed you want for each download.
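For example (the value is illustrative; urlgrabber treats an integer throttle as a bytes-per-second cap, and 0 disables throttling):
throttle = 65536  # limit each download to about 64 KB/s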
Hi!
That was a quick hack!! Anyway, you got it working now. I'll try to get this into the core code in the next version if possible :) Thank you for digging that out for us :D
Well, with max_parallel_downloads the problem persists even if I allow only 1 download at a time: that single connection will still retrieve the file at full speed, which consumes all the bandwidth. This is a big issue for us, because we can't allow everyone's internet to slow down for a couple of minutes just because one person requested a 100 MB file.
From what I've seen, it is possible to implement. With your permission, I'd like to start messing with the code to see if I can do something about it. (I've never programmed in Python, but what the heck... :) )
Sirkike,
Can you please post your delay pools configuration on the forum, so that other users can benefit from it?
Thanks in advance :)
256 kbit/s of bandwidth for downloads, with global bursts of 512 kbit/s:
acl squidlocal src 127.0.0.1
delay_pools 1
delay_class 1 1
delay_parameters 1 32768/65536
delay_access 1 allow squidlocal
delay_access 1 deny all
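For reference, delay_parameters takes values in bytes per second: 32768 B/s × 8 = 256 kbit/s sustained, and the 65536-byte bucket allows bursts of up to 512 kbit.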
Checking the delay pool, I found this... it's very strange...
http://www.elblogmio.com.ar/wp-content/uploads/2009/02/squidlocal_pool.jpg
The delay pool works fine, but... the cached files are accessed again through Squid. Not all of them, though: I tried other files that are cached and they are not accessed through Squid.
Perhaps I should use always_direct for the public IP of cache_host?
Sirkike,
I think always_direct would be a nice hack for this. Please try it and let us know.
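Something like this in squid.conf might do it (the ACL name and the address are just examples; use the public IP your cache_host points at):
acl videocache_dst dst 200.0.0.1/32
always_direct allow videocache_dst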
Thanks in advance :)
It doesn't work; it is still fetching the videos through the Squid cache. I think I need to set cache_host to the local LAN IP of the server, not the public IP.
Sirkike,
I think you should try going that way; this setup should work fine.
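For reference, the change would look something like this in /etc/videocache.conf (the address is illustrative; use your server's actual LAN IP):
cache_host = 192.168.1.100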
Thank You!
Yes... that setup works fine, but... I need cache_host to be the public IP. :P
Sirkike,
Well, then follow the same procedure, replacing the LAN IP with your WAN IP and using always_direct for your WAN IP.
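For example (the address is illustrative only), set cache_host in /etc/videocache.conf to your WAN IP and point the always_direct ACL in squid.conf at the same address:
cache_host = 200.0.0.1
acl videocache_dst dst 200.0.0.1/32
always_direct allow videocache_dst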
Thank You!
Yes, that did it. But like I said earlier, even with always_direct, when you request a video via the public address 200.XXX/videocache/0/etc it still passes through the cache at 127.0.0.1, as pictured. Still, no matter... it doesn't happen with all the videos, so no problem.
Let me tell you, you've done a great job; this utility is very useful.