Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - 3dwannab

31
[Max] I need help! / Working MXS code to remove .hdc file
« on: 2018-05-09, 20:25:32 »
I've managed to get that working. Here's the code to remove that asset completely from the metadata and ATSOps, in the time it takes to save the file. Comments are in the code.

No reopening of the file is necessary.

Code: [Select]
/* Script to remove the *.hdc file from Corona Renderer, whether missing or not.
Written by 3dwannab
v1.0 - 09/05/2018
*/
-- clearlistener()
if classof renderers.current == CoronaRenderer then (
    -- Clear the cache path from the render settings
    renderers.current.gi_uhdCache_file = ""
    -- Update asset tracking
    ATSOps.refresh()
    -- Get the current max file
    current_filename = (maxFilePath + maxFilename)
    -- Retrieve all file assets from the file in current_filename
    fileassets = getMAXFileAssetMetadata current_filename
    -- Find the entry matching the .hdc animation asset and collect its asset ID
    ReleaseReference_Id = (for i in fileassets where matchpattern (i as string) pattern:"*filename:\"*.hdc\" type:#animation resolvedFilename:\"*\"*" collect i.assetId)[1]
    -- ReleaseReference_Filename = (for i in fileassets where matchpattern (i as string) pattern:"*filename:\"*.hdc\" type:#animation resolvedFilename:\"*.hdc\"*" collect i.FileName)[1]
    -- Release the reference found above
    try (AssetManager.ReleaseReference ReleaseReference_Id) catch (print "No cache file found...")
    -- Resave the file only if ReleaseReference_Id is not undefined
    if ReleaseReference_Id != undefined then (
        savemaxfile current_filename
        -- Garbage collection
        gc light:true
    )
)
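As a quick sanity check after the save, you can re-read the metadata and confirm the animation asset is gone. A minimal sketch, using only the same getMAXFileAssetMetadata call as above:

Code: [Select]
-- Sketch: confirm no #animation cache asset survived the resave
checkAssets = getMAXFileAssetMetadata (maxFilePath + maxFilename)
leftovers = for i in checkAssets where matchpattern (i as string) pattern:"*type:#animation*" collect i
if leftovers.count == 0 then print "Cache asset removed OK" else print leftovers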

32
I was thinking of mass editing directories, so I wrote a script which should remove those entries. But setMaxFileAssetMetadata isn't working like I thought it should.

I've asked why here:
http://www.scriptspot.com/forums/3ds-max/general-scripting/using-setmaxfileassetmetadata-but-not-sticking

Here's the code I posted when asking the question (maybe Deadclown, other scripters, or the Corona Team can help).
PLEASE DO NOT USE THIS. IF YOU DO, TEST FILES ONLY.
Code: [Select]
-- Max file to get the metadata from
theMaxFile = @"C:\Users\username\Documents\3dsMax\scenes\Maxstart - Copy.max"

-- Get the metadata
oldMetaArray = getMAXFileAssetMetadata theMaxFile

-- Format-print the old metadata
for i in oldMetaArray do (format "\nOLD METADATA: %" i)

-- Collect every entry that does NOT match "type:#animation resolvedFilename"
NewmetaArray = for i in oldMetaArray where not matchpattern (i as string) pattern:"*type:#animation resolvedFilename*" collect i

-- Write NewmetaArray back to theMaxFile
setMaxFileAssetMetadata theMaxFile NewmetaArray

-- Re-read the metadata from theMaxFile. Oops! Still the same.
for i in (getMAXFileAssetMetadata theMaxFile) do (format "\nNEW METADATA, STILL THE SAME: %" i)
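For what it's worth, the approach in the post above (which did eventually stick for me) releases the asset through AssetManager on the currently open scene rather than rewriting a closed file's metadata. A rough sketch reusing the same pattern match; note it only works with the file open in Max:

Code: [Select]
-- Sketch: with the file open, release the offending asset instead of editing the metadata
openAssets = getMAXFileAssetMetadata (maxFilePath + maxFilename)
badId = (for i in openAssets where matchpattern (i as string) pattern:"*type:#animation resolvedFilename*" collect i.assetId)[1]
if badId != undefined do AssetManager.ReleaseReference badId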

33
I have the same problem, only I found that this entry still exists in the asset metadata of the max file and is not removed/handled by Corona.

See the excerpt below from the metadata, showing filename:"HdCache.uhd"; it is not listed in Max's Asset Tracking.
Code: [Select]
(AssetMetadata_StructDef assetId:"{CD94B2BE-D5E2-4430-A30A-DEC169117561}" filename:"HdCache.uhd" type:#animation resolvedFilename:"")

To me, it looks like the filename: portion is not getting updated, whereas resolvedFilename: is fine and gets updated when you change the file in the render dialog.

For the guys/girls above, your problem can be solved easily by running this line in the Listener, which will remove it from Asset Tracking (not from the metadata; it will still exist there).
It clears the path for the uhd cache.
Code: [Select]
renderers.current.gi_uhdCache_file = ""
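If Corona isn't the active renderer, that property won't exist, so a guarded version (the same classof check as in the script in the first post) is safer:

Code: [Select]
-- Only clear the UHD cache path when Corona is the active renderer
if classof renderers.current == CoronaRenderer do renderers.current.gi_uhdCache_file = ""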
Ondra, shall I post this on the bugtracker?



For those that want to check, run this (change the path between the quotes to point to your .max file) and search the output for type:#animation.
Code: [Select]
getMAXFileAssetMetadata @"C:\Assets\Models\Maxstart.max"

34
I have to say I've completely overlooked a CMD command called robocopy for my backup procedure!!

It's quite amazing.

I've created a W Drive Backup.cmd file (W Drive Backup can be any filename you like) and wrote up this little beaut. One click and it runs the copy with options via the switches. I just need to find a program that can run this every hour or so.

Just don't name the file robocopy.cmd or you're in for a world of pain, resulting in a continuous loop of the .cmd file.

For the schedule, Windows' built-in Task Scheduler will probably do the trick to start it at login, and the /MON:n and /MOT:m switches will then monitor by changes and by time. Please refer to the code notes for these two switches, and see the schtasks sketch at the end of this post.

Code: [Select]
REM title Backing Up Made Easy. [%~nx0] by 3dwannab
@ECHO off

SET SOR_PATH1=W:
SET DES_PATH1=Z:\Backups\Drive W Backup
Start /Min "JOB: %DES_PATH1% Job" robocopy "%SOR_PATH1%" "%DES_PATH1%" /MON:50 /MOT:30 /XO /MIR /FFT /Z /XA:H /R:10 /W:10 /MT:5 /XD "$RECYCLE.BIN" "System Volume Information" /XF "thumbs.db"
attrib -s -a -h "%DES_PATH1%"

::::::::::::::::::::::::::::::::::::::::

SET SOR_PATH2=C:
SET DES_PATH2=Z:\Backups\Drive C Backup
Start /Min "JOB: %DES_PATH2% Job" robocopy "%SOR_PATH2%" "%DES_PATH2%" /MON:50 /MOT:60 /XO /MIR /FFT /Z /XA:H /R:10 /W:10 /MT:5 /XD "$RECYCLE.BIN" "System Volume Information" "*Windows*" "*microsoft*" "*dropbox*" "nvidia" "temp" "C:\Windows" "C:\ffmpeg" "C:\PerfLogs" "C:\Python34" "C:\Swsetup" "C:\temp" /XF "C:\*" "thumbs.db" "*.thumb" "*.bak" "*.sv$"
attrib -s -a -h "%DES_PATH2%"

REM @pause

:: More Info here:
:: https://social.technet.microsoft.com/wiki/contents/articles/1073.robocopy-and-a-few-examples.aspx
:: https://ss64.com/nt/robocopy.html

:: NOTES
:: ---------------------------------
:: start /min runs robocopy in minimised mode.
:: /MT[:n] where n = no. of threads being used.
:: /MIR specifies that Robocopy should mirror the source directory and the destination directory. Note that this will delete files at the destination if they were deleted at the source.
:: /FFT uses FAT file timing instead of NTFS. This means the granularity is a bit less precise. For across-network share operations this seems to be much more reliable - just don't rely on the file timings to be completely precise to the second.
:: /Z ensures Robocopy can resume the transfer of a large file in mid-file instead of restarting.
:: /XA:H makes Robocopy ignore hidden files, usually these will be system files that we're not interested in.
:: /R:<n> Specifies the number of retries on failed copies. The default value of n is 1,000,000 (one million retries max).
:: /W:<n> Specifies the wait time between retries, in seconds. The default value of n is 30 (wait time 30 seconds).
:: /MON:n : MONitor source; run again when more than n changes seen.
:: /MOT:m : MOnitor source; run again in m minutes Time, if changed.

:: EXAMPLES
:: ---------------------------------
:: #8 Mirror directory excl. deletion
:: To mirror the directory "C:\directory" to "\\server2\directory", excluding \\server2\directory\dir2 from being deleted (since it isn't present in C:\directory), use the following command:

:: Robocopy "C:\directory" "\\server2\directory" /MIR /XD "\\server2\directory\dir2"
:: Robocopy can be set up as a simple Scheduled Task that runs daily, hourly, weekly etc. Note that Robocopy also contains a switch that will make Robocopy monitor the source for changes and invoke synchronization each time a configurable number of changes has been made. This may work in your scenario, but be aware that Robocopy will not just copy the changes; it will scan the complete directory structure just like a normal mirroring procedure. If there are a lot of files and directories, this may hamper performance.

A basic explanation of the switches is shown after :: in the code above, with resource links.
This will back up my W drive (work folder) and C drive, with a few omissions as you can see.
The only thing I'll change after the initial backup is the thread count of the copies, maybe to 6, so it's less in your face when running every hour.

If you want the CMD window to close automatically, delete the @pause, or comment it out by adding :: like so: ::@pause
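On the scheduling front, Task Scheduler can be driven from the command line too. A minimal sketch, assuming the script was saved to C:\Scripts\W Drive Backup.cmd (a hypothetical path, point it at your own .cmd); once started, the /MON and /MOT switches keep it re-running:

Code: [Select]
REM Run once from an elevated prompt: starts the backup script at every login
REM "C:\Scripts\W Drive Backup.cmd" is a hypothetical path, change it to your own
schtasks /Create /TN "W Drive Backup" /TR "\"C:\Scripts\W Drive Backup.cmd\"" /SC ONLOGON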

35
Hardware / 10GbE UPDATE WITH HICCUPS
« on: 2018-03-01, 17:45:12 »
10GbE UPDATE:

I was getting the CrystalDiskMark (CDM) results below.



But when transferring files I was only getting 80MB/s. I'm not sure why, but swapping the LAN port on the back of the Intel X540-T2 got me results that were closer to the CDM ones.
I thought both ports on that card were 10GbE; or else the top one of the two is faulty.

I'm getting these real file transfer speeds (when copying a 16GB file):

NAS & HD SSD:
500MB/s from NAS to HD SSD
350MB/s from HD SSD to NAS.

NVMe SSD (960 PRO) & NAS:
430MB/s from NAS to NVMe SSD
350MB/s from NVMe SSD to NAS

What stumps me is that the transfers are better to the HD SSD; the NVMe is a little bit worse. I wonder what the bottleneck would be.

36

Quote
They must be the same as the one packaged with the Zenith mobo. Yes, great to have the option of a cheaper single-socket card. It's more expensive in the UK, of course!

Thanks for the update, 3Dwannab. You have done well to get 5.5x performance. Most of the other reviews I have read give about a 3-4x performance gain over 1GbE speeds, but thinking about it, that was just with standard NAS drives. I'm confused; I thought you had to choose either the 10GbE card or the PCIe-based SSD in that unit, as there was only one slot?

Yeah, they prob slapped the outer casing on mine and rebranded it.

No, my workflow is two NVMes in RAID 0 (2TB), which backs up to the NAS every so often, then to external offsite storage every weekend. I went ahead and got the 10G setup anyway.

Why the hell not, I like blowing my money away ;)

Quote
Btw, a bunch of Asus XG C100C 10Gbit cards arrived at my office :- ). Really nice, single-port, passively cooled, and you can get them for 90 euros each. Imho better than the Intel card; let's see if the drivers are fine.

Same as the one I have, only rebranded like I said, probably. Let me know your performance, please.

37
Parity checking FINALLY stopped, so I could install the (simple) 10GbE setup with the DS1517+. I just installed the 10G NIC with the small bracket in the 1517+, installed the RJ45 10GbE NIC that came with my Zenith Extreme motherboard, and used the ASUS switch mentioned in the posts above (RJ45 type).

Here's the W/R speed comparison between my new NAS and an SSD. The latest drivers might bump speeds up a bit (I dunno).
(My old 1GbE setup was 120 write and around 80 read in Seq.)
https://imgur.com/a/Ia2z0
This new setup also gives roughly 5.5 times the performance of the old one. Not exactly 10 times, but I'm sure that's because I'm running SHR RAID. I've more testing to do.

Searching a folder for one filename in FreeCommander took about a minute to finish.

The folder was quite large: 455GB, 19,105 folders and 102,219 files.

Hope this info helps.

38

I've ended up getting a 10G switch anyway, in conjunction with two NVMes in RAID 0.

I installed the four drives into the new one using Synology's online tutorial, no problem. Took half an hour.

Then I installed the extra drive. So far it's taken 6 days to get to 70% of a volume parity consistency check, so I've not been able to install the 10G setup. I was getting 120MB/s on the old 1G setup.

As for searching files: Windows is slow compared to the FreeCommander explorer software, even when you uncheck searching file contents. I bought the donor version of FreeCommander, which is x64 and very good.

You cannot index a NAS like your internal drives. I've seen workarounds but have yet to try them out.

Yes, mapping the drive is the same as usual. Assign Z: as this is the most common server/NAS drive letter. Up to you, though.

Will post back with the W/R speeds and what searching is like when this parity check finishes. (pulls out hair)


39
That's still good :)
With these in this setup, it's the CPU and RAM frequency that's the bottleneck. With overclocking to 4GHz you can get past 7GB/s reads, though writes stay similar.

My setup for this test was a stock Threadripper 1950X (not overclocked yet, but I will do eventually) and 128GB RAM @ 2,800MHz.

40
I never have to try hard at that I'm afraid.


Using this as the primary drive and backing up regularly is a good solution for me.


What sort of W/R speeds do you get with your 10G setup?

41
CDM Results for 2x 960pros in RAID 0. 1.9TB.

:)

42
That makes sense. Thanks.

So the RM appended to the map name could mean roughness and metalness?




43
I'd say so, but I like to stick one in just to see it in greyscale.

44
Inverted the map and just mono'd it with a CC (Color Correct) to have it in greyscale. Works a treat.



45
I will. Just never came across them before.
